00:00:00.000 Started by upstream project "autotest-per-patch" build number 126174
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.019 The recommended git tool is: git
00:00:00.019 using credential 00000000-0000-0000-0000-000000000002
00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.035 Fetching changes from the remote Git repository
00:00:00.037 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.060 Using shallow fetch with depth 1
00:00:00.061 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.061 > git --version # timeout=10
00:00:00.084 > git --version # 'git version 2.39.2'
00:00:00.084 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.115 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.115 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.575 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.586 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.596 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:02.596 > git config core.sparsecheckout # timeout=10
00:00:02.607 > git read-tree -mu HEAD # timeout=10
00:00:02.621 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:02.645 Commit message: "inventory: add WCP3 to free inventory"
00:00:02.646 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:02.730 [Pipeline] Start of Pipeline
00:00:02.744 [Pipeline] library
00:00:02.746 Loading library shm_lib@master
00:00:02.746 Library shm_lib@master is cached. Copying from home.
00:00:02.764 [Pipeline] node
00:00:02.780 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:02.782 [Pipeline] {
00:00:02.792 [Pipeline] catchError
00:00:02.793 [Pipeline] {
00:00:02.809 [Pipeline] wrap
00:00:02.820 [Pipeline] {
00:00:02.826 [Pipeline] stage
00:00:02.828 [Pipeline] { (Prologue)
00:00:03.065 [Pipeline] sh
00:00:03.347 + logger -p user.info -t JENKINS-CI
00:00:03.366 [Pipeline] echo
00:00:03.368 Node: WFP39
00:00:03.376 [Pipeline] sh
00:00:03.674 [Pipeline] setCustomBuildProperty
00:00:03.688 [Pipeline] echo
00:00:03.689 Cleanup processes
00:00:03.695 [Pipeline] sh
00:00:03.977 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.977 4034116 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:03.987 [Pipeline] sh
00:00:04.264 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.264 ++ grep -v 'sudo pgrep'
00:00:04.264 ++ awk '{print $1}'
00:00:04.264 + sudo kill -9
00:00:04.264 + true
00:00:04.278 [Pipeline] cleanWs
00:00:04.288 [WS-CLEANUP] Deleting project workspace...
00:00:04.288 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.293 [WS-CLEANUP] done
00:00:04.297 [Pipeline] setCustomBuildProperty
00:00:04.305 [Pipeline] sh
00:00:04.617 + sudo git config --global --replace-all safe.directory '*'
00:00:04.683 [Pipeline] httpRequest
00:00:04.720 [Pipeline] echo
00:00:04.721 Sorcerer 10.211.164.101 is alive
00:00:04.727 [Pipeline] httpRequest
00:00:04.731 HttpMethod: GET
00:00:04.732 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:04.732 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:04.750 Response Code: HTTP/1.1 200 OK
00:00:04.751 Success: Status code 200 is in the accepted range: 200,404
00:00:04.751 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:19.470 [Pipeline] sh
00:00:19.751 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:19.768 [Pipeline] httpRequest
00:00:19.790 [Pipeline] echo
00:00:19.792 Sorcerer 10.211.164.101 is alive
00:00:19.802 [Pipeline] httpRequest
00:00:19.807 HttpMethod: GET
00:00:19.808 URL: http://10.211.164.101/packages/spdk_dff473c1db6d35e48682552278c9481c99896d36.tar.gz
00:00:19.808 Sending request to url: http://10.211.164.101/packages/spdk_dff473c1db6d35e48682552278c9481c99896d36.tar.gz
00:00:19.815 Response Code: HTTP/1.1 200 OK
00:00:19.815 Success: Status code 200 is in the accepted range: 200,404
00:00:19.816 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_dff473c1db6d35e48682552278c9481c99896d36.tar.gz
00:01:58.425 [Pipeline] sh
00:01:58.707 + tar --no-same-owner -xf spdk_dff473c1db6d35e48682552278c9481c99896d36.tar.gz
00:02:01.250 [Pipeline] sh
00:02:01.535 + git -C spdk log --oneline -n5
00:02:01.535 dff473c1d nvmf: add nvmf_update_mdns_prr
00:02:01.535 0d53b57ed nvmf: consolidate listener addition in avahi_entry_group_add_listeners
00:02:01.535 719d03c6a sock/uring: only register net impl if supported
00:02:01.535 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:02:01.535 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:02:01.547 [Pipeline] }
00:02:01.566 [Pipeline] // stage
00:02:01.575 [Pipeline] stage
00:02:01.576 [Pipeline] { (Prepare)
00:02:01.590 [Pipeline] writeFile
00:02:01.603 [Pipeline] sh
00:02:01.887 + logger -p user.info -t JENKINS-CI
00:02:01.901 [Pipeline] sh
00:02:02.184 + logger -p user.info -t JENKINS-CI
00:02:02.199 [Pipeline] sh
00:02:02.482 + cat autorun-spdk.conf
00:02:02.483 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.483 SPDK_TEST_FUZZER_SHORT=1
00:02:02.483 SPDK_TEST_FUZZER=1
00:02:02.483 SPDK_RUN_UBSAN=1
00:02:02.489 RUN_NIGHTLY=0
00:02:02.496 [Pipeline] readFile
00:02:02.530 [Pipeline] withEnv
00:02:02.532 [Pipeline] {
00:02:02.548 [Pipeline] sh
00:02:02.861 + set -ex
00:02:02.861 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:02:02.861 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:02:02.861 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.861 ++ SPDK_TEST_FUZZER_SHORT=1
00:02:02.861 ++ SPDK_TEST_FUZZER=1
00:02:02.861 ++ SPDK_RUN_UBSAN=1
00:02:02.861 ++ RUN_NIGHTLY=0
00:02:02.861 + case $SPDK_TEST_NVMF_NICS in
00:02:02.861 + DRIVERS=
00:02:02.861 + [[ -n '' ]]
00:02:02.861 + exit 0
00:02:02.868 [Pipeline] }
00:02:02.884 [Pipeline] // withEnv
00:02:02.889 [Pipeline] }
00:02:02.905 [Pipeline] // stage
00:02:02.914 [Pipeline] catchError
00:02:02.916 [Pipeline] {
00:02:02.930 [Pipeline] timeout
00:02:02.931 Timeout
set to expire in 30 min
00:02:02.932 [Pipeline] {
00:02:02.948 [Pipeline] stage
00:02:02.950 [Pipeline] { (Tests)
00:02:02.966 [Pipeline] sh
00:02:03.248 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:03.248 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:03.248 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:02:03.248 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:02:03.248 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:03.248 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:03.248 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:02:03.248 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:03.248 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:03.248 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:03.248 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:02:03.248 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:03.248 + source /etc/os-release
00:02:03.248 ++ NAME='Fedora Linux'
00:02:03.248 ++ VERSION='38 (Cloud Edition)'
00:02:03.248 ++ ID=fedora
00:02:03.248 ++ VERSION_ID=38
00:02:03.248 ++ VERSION_CODENAME=
00:02:03.248 ++ PLATFORM_ID=platform:f38
00:02:03.248 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:03.248 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:03.248 ++ LOGO=fedora-logo-icon
00:02:03.248 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:03.249 ++ HOME_URL=https://fedoraproject.org/
00:02:03.249 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:03.249 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:03.249 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:03.249 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:03.249 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:03.249 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:03.249 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:03.249 ++ SUPPORT_END=2024-05-14
00:02:03.249 ++ VARIANT='Cloud Edition'
00:02:03.249 ++ VARIANT_ID=cloud
00:02:03.249 + uname -a
00:02:03.249 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:02:03.249 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:02:06.536 Hugepages
00:02:06.536 node hugesize free / total
00:02:06.536 node0 1048576kB 0 / 0
00:02:06.536 node0 2048kB 0 / 0
00:02:06.536 node1 1048576kB 0 / 0
00:02:06.536 node1 2048kB 0 / 0
00:02:06.536
00:02:06.536 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:06.536 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:06.536 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:06.536 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:06.536 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:06.536 I/OAT 0000:80:04.7
8086 2021 1 ioatdma - - 00:02:06.536 + rm -f /tmp/spdk-ld-path 00:02:06.536 + source autorun-spdk.conf 00:02:06.536 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.536 ++ SPDK_TEST_FUZZER_SHORT=1 00:02:06.536 ++ SPDK_TEST_FUZZER=1 00:02:06.536 ++ SPDK_RUN_UBSAN=1 00:02:06.536 ++ RUN_NIGHTLY=0 00:02:06.536 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.536 + [[ -n '' ]] 00:02:06.536 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:06.536 + for M in /var/spdk/build-*-manifest.txt 00:02:06.536 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.536 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:06.536 + for M in /var/spdk/build-*-manifest.txt 00:02:06.536 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.536 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:06.536 ++ uname 00:02:06.536 + [[ Linux == \L\i\n\u\x ]] 00:02:06.536 + sudo dmesg -T 00:02:06.536 + sudo dmesg --clear 00:02:06.536 + dmesg_pid=4035598 00:02:06.536 + [[ Fedora Linux == FreeBSD ]] 00:02:06.536 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.536 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.536 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.536 + [[ -x /usr/src/fio-static/fio ]] 00:02:06.536 + export FIO_BIN=/usr/src/fio-static/fio 00:02:06.536 + FIO_BIN=/usr/src/fio-static/fio 00:02:06.536 + sudo dmesg -Tw 00:02:06.536 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.536 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:06.536 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.536 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.536 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.536 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.536 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.536 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.536 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:06.536 Test configuration: 00:02:06.536 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.536 SPDK_TEST_FUZZER_SHORT=1 00:02:06.536 SPDK_TEST_FUZZER=1 00:02:06.536 SPDK_RUN_UBSAN=1 00:02:06.536 RUN_NIGHTLY=0 12:19:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:06.536 12:19:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:06.536 12:19:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.536 12:19:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.536 12:19:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.536 12:19:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.536 12:19:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.536 12:19:01 -- paths/export.sh@5 -- $ export PATH 00:02:06.536 12:19:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.536 12:19:01 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:06.536 12:19:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:06.536 12:19:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721038741.XXXXXX 00:02:06.536 12:19:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721038741.eQx5t4 00:02:06.536 12:19:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:06.536 12:19:01 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:06.536 12:19:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:02:06.536 12:19:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:06.537 12:19:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:06.537 12:19:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:06.537 12:19:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:06.537 12:19:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.537 12:19:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:06.537 12:19:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:06.537 12:19:01 -- pm/common@17 -- $ local monitor 00:02:06.537 12:19:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.537 12:19:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.537 12:19:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.537 12:19:01 -- pm/common@21 -- $ date +%s 00:02:06.537 12:19:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.537 12:19:01 -- pm/common@21 -- $ date +%s 
00:02:06.537 12:19:01 -- pm/common@25 -- $ sleep 1 00:02:06.537 12:19:01 -- pm/common@21 -- $ date +%s 00:02:06.537 12:19:01 -- pm/common@21 -- $ date +%s 00:02:06.537 12:19:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721038741 00:02:06.537 12:19:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721038741 00:02:06.537 12:19:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721038741 00:02:06.537 12:19:01 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721038741 00:02:06.537 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721038741_collect-vmstat.pm.log 00:02:06.537 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721038741_collect-cpu-load.pm.log 00:02:06.537 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721038741_collect-cpu-temp.pm.log 00:02:06.796 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721038741_collect-bmc-pm.bmc.pm.log 00:02:07.733 12:19:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:07.733 12:19:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:07.733 12:19:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:07.733 12:19:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:07.733 12:19:02 -- spdk/autobuild.sh@16 -- $ date -u 00:02:07.733 Mon Jul 15 10:19:02 AM UTC 2024 00:02:07.733 12:19:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:07.733 v24.09-pre-204-gdff473c1d 00:02:07.733 12:19:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:07.733 12:19:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:07.733 12:19:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:07.733 12:19:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:07.733 12:19:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:07.733 12:19:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.733 ************************************ 00:02:07.733 START TEST ubsan 00:02:07.733 ************************************ 00:02:07.733 12:19:02 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:07.733 using ubsan 00:02:07.733 00:02:07.733 real 0m0.001s 00:02:07.733 user 0m0.000s 00:02:07.733 sys 0m0.000s 00:02:07.733 12:19:02 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:07.733 12:19:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:07.733 ************************************ 00:02:07.733 END TEST ubsan 00:02:07.733 ************************************ 00:02:07.733 12:19:02 -- common/autotest_common.sh@1142 -- $ return 0 00:02:07.733 12:19:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:07.733 12:19:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:07.733 12:19:02 
-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:07.733 12:19:02 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:07.733 12:19:02 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:07.733 12:19:02 -- common/autobuild_common.sh@432 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:07.733 12:19:02 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:07.733 12:19:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:07.733 12:19:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.733 ************************************ 00:02:07.733 START TEST autobuild_llvm_precompile 00:02:07.733 ************************************ 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:02:07.733 Target: x86_64-redhat-linux-gnu 00:02:07.733 Thread model: posix 00:02:07.733 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:02:07.733 12:19:02 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:07.992 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:07.992 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:08.559 Using 'verbs' RDMA provider 00:02:21.696 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:36.570 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:36.570 Creating mk/config.mk...done. 00:02:36.570 Creating mk/cc.flags.mk...done. 
00:02:36.570 Type 'make' to build. 00:02:36.570 00:02:36.570 real 0m27.310s 00:02:36.570 user 0m11.934s 00:02:36.570 sys 0m14.672s 00:02:36.570 12:19:30 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:36.570 12:19:30 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:36.570 ************************************ 00:02:36.570 END TEST autobuild_llvm_precompile 00:02:36.570 ************************************ 00:02:36.570 12:19:30 -- common/autotest_common.sh@1142 -- $ return 0 00:02:36.570 12:19:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.570 12:19:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.570 12:19:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.570 12:19:30 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:36.570 12:19:30 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:36.570 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:36.570 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:36.570 Using 'verbs' RDMA provider 00:02:49.062 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:01.275 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:01.275 Creating mk/config.mk...done. 00:03:01.275 Creating mk/cc.flags.mk...done. 00:03:01.275 Type 'make' to build. 00:03:01.275 12:19:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:03:01.275 12:19:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:01.275 12:19:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:01.275 12:19:55 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.275 ************************************ 00:03:01.275 START TEST make 00:03:01.275 ************************************ 00:03:01.275 12:19:55 make -- common/autotest_common.sh@1123 -- $ make -j72 00:03:01.275 make[1]: Nothing to be done for 'all'. 
00:03:02.654 The Meson build system 00:03:02.654 Version: 1.3.1 00:03:02.654 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:02.654 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.654 Build type: native build 00:03:02.654 Project name: libvfio-user 00:03:02.654 Project version: 0.0.1 00:03:02.654 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:02.654 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:02.654 Host machine cpu family: x86_64 00:03:02.654 Host machine cpu: x86_64 00:03:02.654 Run-time dependency threads found: YES 00:03:02.654 Library dl found: YES 00:03:02.654 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:02.654 Run-time dependency json-c found: YES 0.17 00:03:02.654 Run-time dependency cmocka found: YES 1.1.7 00:03:02.654 Program pytest-3 found: NO 00:03:02.655 Program flake8 found: NO 00:03:02.655 Program misspell-fixer found: NO 00:03:02.655 Program restructuredtext-lint found: NO 00:03:02.655 Program valgrind found: YES (/usr/bin/valgrind) 00:03:02.655 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:02.655 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:02.655 Compiler for C supports arguments -Wwrite-strings: YES 00:03:02.655 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:02.655 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:02.655 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:02.655 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:02.655 Build targets in project: 8 00:03:02.655 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:02.655 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:02.655 00:03:02.655 libvfio-user 0.0.1 00:03:02.655 00:03:02.655 User defined options 00:03:02.655 buildtype : debug 00:03:02.655 default_library: static 00:03:02.655 libdir : /usr/local/lib 00:03:02.655 00:03:02.655 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.912 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:02.912 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:02.912 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:02.912 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:02.912 [4/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:02.912 [5/36] Compiling C object samples/null.p/null.c.o 00:03:02.912 [6/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:02.912 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:02.912 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:02.912 [9/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:02.912 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:02.912 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:02.912 [12/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:02.912 [13/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:02.912 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:02.912 [15/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:02.912 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:02.912 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:02.912 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:02.912 [19/36] Compiling C object samples/server.p/server.c.o 00:03:02.912 [20/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:02.912 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:02.912 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:02.912 [23/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:02.912 [24/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:02.912 [25/36] Compiling C object samples/client.p/client.c.o 00:03:02.912 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:02.912 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:02.912 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:03.171 [29/36] Linking static target lib/libvfio-user.a 00:03:03.171 [30/36] Linking target samples/client 00:03:03.171 [31/36] Linking target samples/server 00:03:03.171 [32/36] Linking target samples/lspci 00:03:03.171 [33/36] Linking target samples/null 00:03:03.171 [34/36] Linking target test/unit_tests 00:03:03.171 [35/36] Linking target samples/gpio-pci-idio-16 00:03:03.171 [36/36] Linking target samples/shadow_ioeventfd_server 00:03:03.171 INFO: autodetecting backend as ninja 00:03:03.171 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.171 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.429 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:03.429 ninja: no work to do. 00:03:10.000 The Meson build system 00:03:10.000 Version: 1.3.1 00:03:10.000 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:03:10.000 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:03:10.000 Build type: native build 00:03:10.000 Program cat found: YES (/usr/bin/cat) 00:03:10.000 Project name: DPDK 00:03:10.000 Project version: 24.03.0 00:03:10.000 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:10.000 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:10.001 Host machine cpu family: x86_64 00:03:10.001 Host machine cpu: x86_64 00:03:10.001 Message: ## Building in Developer Mode ## 00:03:10.001 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:10.001 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:10.001 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:10.001 Program python3 found: YES (/usr/bin/python3) 00:03:10.001 Program cat found: YES (/usr/bin/cat) 00:03:10.001 Compiler for C supports arguments -march=native: YES 00:03:10.001 Checking for size of "void *" : 8 00:03:10.001 Checking for size of "void *" : 8 (cached) 00:03:10.001 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:10.001 Library m found: YES 00:03:10.001 Library numa found: YES 00:03:10.001 Has header "numaif.h" : YES 00:03:10.001 Library fdt found: NO 00:03:10.001 Library execinfo found: NO 00:03:10.001 Has header "execinfo.h" : YES 00:03:10.001 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:10.001 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:10.001 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:10.001 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:10.001 Run-time dependency openssl found: YES 3.0.9 00:03:10.001 Run-time dependency libpcap found: YES 1.10.4 00:03:10.001 Has header "pcap.h" with dependency libpcap: YES 00:03:10.001 Compiler for C supports arguments -Wcast-qual: YES 00:03:10.001 Compiler for C supports arguments -Wdeprecated: YES 00:03:10.001 Compiler for C supports arguments -Wformat: YES 00:03:10.001 Compiler for C supports arguments -Wformat-nonliteral: YES 00:03:10.001 Compiler for C supports arguments -Wformat-security: YES 00:03:10.001 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.001 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:10.001 Compiler for C supports arguments -Wnested-externs: YES 00:03:10.001 Compiler for C supports arguments -Wold-style-definition: YES 00:03:10.001 Compiler for C supports arguments -Wpointer-arith: YES 00:03:10.001 Compiler for C supports arguments -Wsign-compare: YES 00:03:10.001 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:10.001 Compiler for C supports arguments -Wundef: YES 00:03:10.001 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.001 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:10.001 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:03:10.001 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:10.001 Program objdump found: YES (/usr/bin/objdump) 00:03:10.001 Compiler for C supports arguments -mavx512f: YES 00:03:10.001 Checking if "AVX512 checking" compiles: YES 00:03:10.001 Fetching value of define "__SSE4_2__" : 1 00:03:10.001 Fetching value of define "__AES__" : 1 00:03:10.001 Fetching value of define "__AVX__" : 1 00:03:10.001 Fetching value of define "__AVX2__" : 1 00:03:10.001 Fetching value of define "__AVX512BW__" : 1 00:03:10.001 Fetching value of define "__AVX512CD__" : 1 00:03:10.001 Fetching value of define "__AVX512DQ__" : 1 00:03:10.001 Fetching value of define "__AVX512F__" : 1 00:03:10.001 Fetching value of define "__AVX512VL__" : 1 00:03:10.001 Fetching value of define "__PCLMUL__" : 1 00:03:10.001 Fetching value of define "__RDRND__" : 1 00:03:10.001 Fetching value of define "__RDSEED__" : 1 00:03:10.001 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:10.001 Fetching value of define "__znver1__" : (undefined) 00:03:10.001 Fetching value of define "__znver2__" : (undefined) 00:03:10.001 Fetching value of define "__znver3__" : (undefined) 00:03:10.001 Fetching value of define "__znver4__" : (undefined) 00:03:10.001 Compiler for C supports arguments -Wno-format-truncation: NO 00:03:10.001 Message: lib/log: Defining dependency "log" 00:03:10.001 Message: lib/kvargs: Defining dependency "kvargs" 00:03:10.001 Message: lib/telemetry: Defining dependency "telemetry" 00:03:10.001 Checking for function "getentropy" : NO 00:03:10.001 Message: lib/eal: Defining dependency "eal" 00:03:10.001 Message: lib/ring: Defining dependency "ring" 00:03:10.001 Message: lib/rcu: Defining dependency "rcu" 00:03:10.001 Message: lib/mempool: Defining dependency "mempool" 00:03:10.001 Message: lib/mbuf: Defining dependency "mbuf" 00:03:10.001 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:10.001 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:10.001 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:10.001 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:10.001 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:10.001 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:10.001 Compiler for C supports arguments -mpclmul: YES 00:03:10.001 Compiler for C supports arguments -maes: YES 00:03:10.001 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:10.001 Compiler for C supports arguments -mavx512bw: YES 00:03:10.001 Compiler for C supports arguments -mavx512dq: YES 00:03:10.001 Compiler for C supports arguments -mavx512vl: YES 00:03:10.001 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:10.001 Compiler for C supports arguments -mavx2: YES 00:03:10.001 Compiler for C supports arguments -mavx: YES 00:03:10.001 Message: lib/net: Defining dependency "net" 00:03:10.001 Message: lib/meter: Defining dependency "meter" 00:03:10.001 Message: lib/ethdev: Defining dependency "ethdev" 00:03:10.001 Message: lib/pci: Defining dependency "pci" 00:03:10.001 Message: lib/cmdline: Defining dependency "cmdline" 00:03:10.001 Message: lib/hash: Defining dependency "hash" 00:03:10.001 Message: lib/timer: Defining dependency "timer" 00:03:10.001 Message: lib/compressdev: Defining dependency "compressdev" 00:03:10.001 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:10.001 Message: lib/dmadev: Defining dependency "dmadev" 00:03:10.001 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:10.001 Message: lib/power: Defining dependency "power" 00:03:10.001 Message: lib/reorder: Defining 
dependency "reorder" 00:03:10.001 Message: lib/security: Defining dependency "security" 00:03:10.001 Has header "linux/userfaultfd.h" : YES 00:03:10.001 Has header "linux/vduse.h" : YES 00:03:10.001 Message: lib/vhost: Defining dependency "vhost" 00:03:10.001 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:03:10.001 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:10.001 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:10.001 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:10.001 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:10.001 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:10.001 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:10.001 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:10.001 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:10.001 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:10.001 Program doxygen found: YES (/usr/bin/doxygen) 00:03:10.001 Configuring doxy-api-html.conf using configuration 00:03:10.001 Configuring doxy-api-man.conf using configuration 00:03:10.001 Program mandb found: YES (/usr/bin/mandb) 00:03:10.001 Program sphinx-build found: NO 00:03:10.001 Configuring rte_build_config.h using configuration 00:03:10.001 Message: 00:03:10.001 ================= 00:03:10.001 Applications Enabled 00:03:10.001 ================= 00:03:10.001 00:03:10.001 apps: 00:03:10.001 00:03:10.001 00:03:10.001 Message: 00:03:10.001 ================= 00:03:10.001 Libraries Enabled 00:03:10.001 ================= 00:03:10.001 00:03:10.001 libs: 00:03:10.001 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:10.001 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:10.001 cryptodev, dmadev, power, reorder, security, vhost, 00:03:10.001 00:03:10.001 Message: 00:03:10.001 =============== 00:03:10.001 Drivers Enabled 00:03:10.001 =============== 00:03:10.001 00:03:10.001 common: 00:03:10.001 00:03:10.001 bus: 00:03:10.001 pci, vdev, 00:03:10.001 mempool: 00:03:10.001 ring, 00:03:10.001 dma: 00:03:10.001 00:03:10.001 net: 00:03:10.001 00:03:10.001 crypto: 00:03:10.001 00:03:10.001 compress: 00:03:10.001 00:03:10.001 vdpa: 00:03:10.001 00:03:10.001 00:03:10.001 Message: 00:03:10.001 ================= 00:03:10.001 Content Skipped 00:03:10.001 ================= 00:03:10.001 00:03:10.001 apps: 00:03:10.001 dumpcap: explicitly disabled via build config 00:03:10.001 graph: explicitly disabled via build config 00:03:10.001 pdump: explicitly disabled via build config 00:03:10.001 proc-info: explicitly disabled via build config 00:03:10.001 test-acl: explicitly disabled via build config 00:03:10.001 test-bbdev: explicitly disabled via build config 00:03:10.001 test-cmdline: explicitly disabled via build config 00:03:10.001 test-compress-perf: explicitly disabled via build config 00:03:10.001 test-crypto-perf: explicitly disabled via build config 00:03:10.001 test-dma-perf: explicitly disabled via build config 00:03:10.001 test-eventdev: explicitly disabled via build config 00:03:10.001 test-fib: explicitly disabled via build config 00:03:10.001 test-flow-perf: explicitly disabled via build config 00:03:10.001 test-gpudev: explicitly disabled via build config 00:03:10.001 test-mldev: explicitly disabled via build config 00:03:10.001 test-pipeline: explicitly disabled via build config 00:03:10.001 test-pmd: explicitly 
disabled via build config 00:03:10.001 test-regex: explicitly disabled via build config 00:03:10.001 test-sad: explicitly disabled via build config 00:03:10.001 test-security-perf: explicitly disabled via build config 00:03:10.001 00:03:10.001 libs: 00:03:10.001 argparse: explicitly disabled via build config 00:03:10.002 metrics: explicitly disabled via build config 00:03:10.002 acl: explicitly disabled via build config 00:03:10.002 bbdev: explicitly disabled via build config 00:03:10.002 bitratestats: explicitly disabled via build config 00:03:10.002 bpf: explicitly disabled via build config 00:03:10.002 cfgfile: explicitly disabled via build config 00:03:10.002 distributor: explicitly disabled via build config 00:03:10.002 efd: explicitly disabled via build config 00:03:10.002 eventdev: explicitly disabled via build config 00:03:10.002 dispatcher: explicitly disabled via build config 00:03:10.002 gpudev: explicitly disabled via build config 00:03:10.002 gro: explicitly disabled via build config 00:03:10.002 gso: explicitly disabled via build config 00:03:10.002 ip_frag: explicitly disabled via build config 00:03:10.002 jobstats: explicitly disabled via build config 00:03:10.002 latencystats: explicitly disabled via build config 00:03:10.002 lpm: explicitly disabled via build config 00:03:10.002 member: explicitly disabled via build config 00:03:10.002 pcapng: explicitly disabled via build config 00:03:10.002 rawdev: explicitly disabled via build config 00:03:10.002 regexdev: explicitly disabled via build config 00:03:10.002 mldev: explicitly disabled via build config 00:03:10.002 rib: explicitly disabled via build config 00:03:10.002 sched: explicitly disabled via build config 00:03:10.002 stack: explicitly disabled via build config 00:03:10.002 ipsec: explicitly disabled via build config 00:03:10.002 pdcp: explicitly disabled via build config 00:03:10.002 fib: explicitly disabled via build config 00:03:10.002 port: explicitly disabled via build config 00:03:10.002 pdump: explicitly disabled via build config 00:03:10.002 table: explicitly disabled via build config 00:03:10.002 pipeline: explicitly disabled via build config 00:03:10.002 graph: explicitly disabled via build config 00:03:10.002 node: explicitly disabled via build config 00:03:10.002 00:03:10.002 drivers: 00:03:10.002 common/cpt: not in enabled drivers build config 00:03:10.002 common/dpaax: not in enabled drivers build config 00:03:10.002 common/iavf: not in enabled drivers build config 00:03:10.002 common/idpf: not in enabled drivers build config 00:03:10.002 common/ionic: not in enabled drivers build config 00:03:10.002 common/mvep: not in enabled drivers build config 00:03:10.002 common/octeontx: not in enabled drivers build config 00:03:10.002 bus/auxiliary: not in enabled drivers build config 00:03:10.002 bus/cdx: not in enabled drivers build config 00:03:10.002 bus/dpaa: not in enabled drivers build config 00:03:10.002 bus/fslmc: not in enabled drivers build config 00:03:10.002 bus/ifpga: not in enabled drivers build config 00:03:10.002 bus/platform: not in enabled drivers build config 00:03:10.002 bus/uacce: not in enabled drivers build config 00:03:10.002 bus/vmbus: not in enabled drivers build config 00:03:10.002 common/cnxk: not in enabled drivers build config 00:03:10.002 common/mlx5: not in enabled drivers build config 00:03:10.002 common/nfp: not in enabled drivers build config 00:03:10.002 common/nitrox: not in enabled drivers build config 00:03:10.002 common/qat: not in enabled drivers build config 
00:03:10.002 common/sfc_efx: not in enabled drivers build config 00:03:10.002 mempool/bucket: not in enabled drivers build config 00:03:10.002 mempool/cnxk: not in enabled drivers build config 00:03:10.002 mempool/dpaa: not in enabled drivers build config 00:03:10.002 mempool/dpaa2: not in enabled drivers build config 00:03:10.002 mempool/octeontx: not in enabled drivers build config 00:03:10.002 mempool/stack: not in enabled drivers build config 00:03:10.002 dma/cnxk: not in enabled drivers build config 00:03:10.002 dma/dpaa: not in enabled drivers build config 00:03:10.002 dma/dpaa2: not in enabled drivers build config 00:03:10.002 dma/hisilicon: not in enabled drivers build config 00:03:10.002 dma/idxd: not in enabled drivers build config 00:03:10.002 dma/ioat: not in enabled drivers build config 00:03:10.002 dma/skeleton: not in enabled drivers build config 00:03:10.002 net/af_packet: not in enabled drivers build config 00:03:10.002 net/af_xdp: not in enabled drivers build config 00:03:10.002 net/ark: not in enabled drivers build config 00:03:10.002 net/atlantic: not in enabled drivers build config 00:03:10.002 net/avp: not in enabled drivers build config 00:03:10.002 net/axgbe: not in enabled drivers build config 00:03:10.002 net/bnx2x: not in enabled drivers build config 00:03:10.002 net/bnxt: not in enabled drivers build config 00:03:10.002 net/bonding: not in enabled drivers build config 00:03:10.002 net/cnxk: not in enabled drivers build config 00:03:10.002 net/cpfl: not in enabled drivers build config 00:03:10.002 net/cxgbe: not in enabled drivers build config 00:03:10.002 net/dpaa: not in enabled drivers build config 00:03:10.002 net/dpaa2: not in enabled drivers build config 00:03:10.002 net/e1000: not in enabled drivers build config 00:03:10.002 net/ena: not in enabled drivers build config 00:03:10.002 net/enetc: not in enabled drivers build config 00:03:10.002 net/enetfec: not in enabled drivers build config 00:03:10.002 net/enic: not in enabled drivers build config 00:03:10.002 net/failsafe: not in enabled drivers build config 00:03:10.002 net/fm10k: not in enabled drivers build config 00:03:10.002 net/gve: not in enabled drivers build config 00:03:10.002 net/hinic: not in enabled drivers build config 00:03:10.002 net/hns3: not in enabled drivers build config 00:03:10.002 net/i40e: not in enabled drivers build config 00:03:10.002 net/iavf: not in enabled drivers build config 00:03:10.002 net/ice: not in enabled drivers build config 00:03:10.002 net/idpf: not in enabled drivers build config 00:03:10.002 net/igc: not in enabled drivers build config 00:03:10.002 net/ionic: not in enabled drivers build config 00:03:10.002 net/ipn3ke: not in enabled drivers build config 00:03:10.002 net/ixgbe: not in enabled drivers build config 00:03:10.002 net/mana: not in enabled drivers build config 00:03:10.002 net/memif: not in enabled drivers build config 00:03:10.002 net/mlx4: not in enabled drivers build config 00:03:10.002 net/mlx5: not in enabled drivers build config 00:03:10.002 net/mvneta: not in enabled drivers build config 00:03:10.002 net/mvpp2: not in enabled drivers build config 00:03:10.002 net/netvsc: not in enabled drivers build config 00:03:10.002 net/nfb: not in enabled drivers build config 00:03:10.002 net/nfp: not in enabled drivers build config 00:03:10.002 net/ngbe: not in enabled drivers build config 00:03:10.002 net/null: not in enabled drivers build config 00:03:10.002 net/octeontx: not in enabled drivers build config 00:03:10.002 net/octeon_ep: not in enabled 
drivers build config 00:03:10.002 net/pcap: not in enabled drivers build config 00:03:10.002 net/pfe: not in enabled drivers build config 00:03:10.002 net/qede: not in enabled drivers build config 00:03:10.002 net/ring: not in enabled drivers build config 00:03:10.002 net/sfc: not in enabled drivers build config 00:03:10.002 net/softnic: not in enabled drivers build config 00:03:10.002 net/tap: not in enabled drivers build config 00:03:10.002 net/thunderx: not in enabled drivers build config 00:03:10.002 net/txgbe: not in enabled drivers build config 00:03:10.002 net/vdev_netvsc: not in enabled drivers build config 00:03:10.002 net/vhost: not in enabled drivers build config 00:03:10.002 net/virtio: not in enabled drivers build config 00:03:10.002 net/vmxnet3: not in enabled drivers build config 00:03:10.002 raw/*: missing internal dependency, "rawdev" 00:03:10.002 crypto/armv8: not in enabled drivers build config 00:03:10.002 crypto/bcmfs: not in enabled drivers build config 00:03:10.002 crypto/caam_jr: not in enabled drivers build config 00:03:10.002 crypto/ccp: not in enabled drivers build config 00:03:10.002 crypto/cnxk: not in enabled drivers build config 00:03:10.002 crypto/dpaa_sec: not in enabled drivers build config 00:03:10.002 crypto/dpaa2_sec: not in enabled drivers build config 00:03:10.002 crypto/ipsec_mb: not in enabled drivers build config 00:03:10.002 crypto/mlx5: not in enabled drivers build config 00:03:10.002 crypto/mvsam: not in enabled drivers build config 00:03:10.002 crypto/nitrox: not in enabled drivers build config 00:03:10.002 crypto/null: not in enabled drivers build config 00:03:10.002 crypto/octeontx: not in enabled drivers build config 00:03:10.002 crypto/openssl: not in enabled drivers build config 00:03:10.002 crypto/scheduler: not in enabled drivers build config 00:03:10.002 crypto/uadk: not in enabled drivers build config 00:03:10.002 crypto/virtio: not in enabled drivers build config 00:03:10.002 compress/isal: not in enabled drivers build config 00:03:10.002 compress/mlx5: not in enabled drivers build config 00:03:10.002 compress/nitrox: not in enabled drivers build config 00:03:10.002 compress/octeontx: not in enabled drivers build config 00:03:10.002 compress/zlib: not in enabled drivers build config 00:03:10.002 regex/*: missing internal dependency, "regexdev" 00:03:10.002 ml/*: missing internal dependency, "mldev" 00:03:10.002 vdpa/ifc: not in enabled drivers build config 00:03:10.002 vdpa/mlx5: not in enabled drivers build config 00:03:10.002 vdpa/nfp: not in enabled drivers build config 00:03:10.002 vdpa/sfc: not in enabled drivers build config 00:03:10.002 event/*: missing internal dependency, "eventdev" 00:03:10.002 baseband/*: missing internal dependency, "bbdev" 00:03:10.002 gpu/*: missing internal dependency, "gpudev" 00:03:10.002 00:03:10.002 00:03:10.002 Build targets in project: 85 00:03:10.002 00:03:10.002 DPDK 24.03.0 00:03:10.002 00:03:10.002 User defined options 00:03:10.002 buildtype : debug 00:03:10.002 default_library : static 00:03:10.002 libdir : lib 00:03:10.002 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:03:10.002 c_args : -fPIC -Werror 00:03:10.002 c_link_args : 00:03:10.002 cpu_instruction_set: native 00:03:10.003 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:10.003 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:10.003 enable_docs : false 00:03:10.003 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:10.003 enable_kmods : false 00:03:10.003 max_lcores : 128 00:03:10.003 tests : false 00:03:10.003 00:03:10.003 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:10.003 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:03:10.003 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:10.003 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:10.003 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:10.003 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:10.003 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:10.003 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:10.003 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:10.003 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:10.003 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:10.003 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:10.003 [11/268] Linking static target lib/librte_kvargs.a 00:03:10.003 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:10.003 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:10.003 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:10.003 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:10.003 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:10.003 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:10.003 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:10.003 [19/268] Linking static target lib/librte_log.a 00:03:10.003 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.003 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:10.003 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:10.003 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:10.003 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:10.003 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:10.003 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.003 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:10.003 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:10.003 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:10.003 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:10.003 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:10.003 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:10.003 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:10.003 [34/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:10.003 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:10.003 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:10.003 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:10.003 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:10.003 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:10.003 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:10.003 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:10.003 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:10.003 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:10.003 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:10.003 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:10.003 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:10.003 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:10.003 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:10.003 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:10.003 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:10.285 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:10.285 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:10.285 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.285 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:10.285 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:10.285 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:10.285 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:10.285 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:10.285 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:10.285 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:10.285 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:10.285 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:10.285 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:10.285 [64/268] Linking static target lib/librte_telemetry.a 00:03:10.285 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:10.285 [66/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.285 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:10.285 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:10.285 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:10.285 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:10.285 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:10.285 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:10.285 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:10.285 [74/268] 
Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:10.285 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:10.285 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:10.285 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:10.285 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:10.285 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:10.285 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:10.285 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.285 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:10.285 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:10.285 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:10.285 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:10.285 [86/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:10.285 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:10.285 [88/268] Linking static target lib/librte_pci.a 00:03:10.285 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:10.285 [90/268] Linking static target lib/librte_ring.a 00:03:10.285 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.285 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:10.285 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:10.285 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:10.285 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.285 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:10.285 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:10.285 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:10.285 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:10.285 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.285 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.285 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:10.285 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:10.285 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:10.285 [105/268] Linking static target lib/librte_eal.a 00:03:10.285 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.285 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.285 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:10.285 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:10.285 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.285 [111/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.285 [112/268] Linking static target lib/librte_rcu.a 00:03:10.285 [113/268] Linking static target lib/librte_mempool.a 00:03:10.285 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.285 [115/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.544 [116/268] Linking target lib/librte_log.so.24.1 00:03:10.544 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.544 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.544 [119/268] Linking static target lib/librte_mbuf.a 00:03:10.544 [120/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.544 [121/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.544 [122/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:10.544 [123/268] Linking static target lib/librte_net.a 00:03:10.544 [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:10.544 [125/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.544 [126/268] Linking target lib/librte_kvargs.so.24.1 00:03:10.544 [127/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.802 [128/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.802 [129/268] Linking static target lib/librte_meter.a 00:03:10.802 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:10.802 [131/268] Linking target lib/librte_telemetry.so.24.1 00:03:10.802 [132/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:10.802 [133/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:10.802 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:10.802 [135/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:10.802 [136/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.802 [137/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:10.802 [138/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:10.802 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:10.802 [140/268] Linking static target lib/librte_timer.a 00:03:10.802 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.802 [142/268] Linking static target lib/librte_cmdline.a 00:03:10.802 [143/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:10.802 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.802 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:10.802 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.802 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:10.802 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.802 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:10.802 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:10.802 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.802 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.802 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:10.802 [154/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:10.802 [155/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:10.802 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.802 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.802 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:10.802 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.802 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:10.802 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:10.802 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.802 [163/268] Linking static target lib/librte_dmadev.a 00:03:10.802 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:10.802 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:10.802 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.802 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:10.802 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:10.802 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.802 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.802 [171/268] Linking static target lib/librte_compressdev.a 00:03:10.802 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:10.802 [173/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.802 [174/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:10.802 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:10.802 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:11.060 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:11.060 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.060 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:11.060 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:11.060 [181/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:11.060 [182/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.060 [183/268] Linking static target lib/librte_hash.a 00:03:11.060 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.060 [185/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:11.060 [186/268] Linking static target lib/librte_power.a 00:03:11.060 [187/268] Linking static target lib/librte_reorder.a 00:03:11.060 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:11.060 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:11.060 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:11.060 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:11.060 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:11.060 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:11.060 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:11.060 [195/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.060 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.060 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:11.060 [198/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.060 [199/268] Linking static target drivers/librte_bus_vdev.a 00:03:11.060 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:11.060 [201/268] Linking static target lib/librte_security.a 00:03:11.061 [202/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.061 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.061 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.061 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:11.061 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:11.318 [207/268] Linking static target lib/librte_cryptodev.a 00:03:11.318 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:11.318 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.318 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.318 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.318 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.318 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.318 [214/268] Linking static target drivers/librte_bus_pci.a 00:03:11.318 [215/268] Linking static target drivers/librte_mempool_ring.a 00:03:11.318 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:11.318 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:11.318 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.318 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.577 [220/268] Linking static target lib/librte_ethdev.a 00:03:11.577 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.577 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.835 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.093 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.093 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.093 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.093 [227/268] Linking static target lib/librte_vhost.a 00:03:12.093 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.093 [229/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.467 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.399 [231/268] Generating lib/vhost.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:21.038 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.409 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.409 [234/268] Linking target lib/librte_eal.so.24.1 00:03:22.665 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:22.665 [236/268] Linking target lib/librte_pci.so.24.1 00:03:22.665 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:22.665 [238/268] Linking target lib/librte_ring.so.24.1 00:03:22.665 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:22.665 [240/268] Linking target lib/librte_meter.so.24.1 00:03:22.665 [241/268] Linking target lib/librte_timer.so.24.1 00:03:22.922 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:22.922 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:22.922 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:22.922 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:22.922 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:22.922 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:22.922 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:22.922 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:23.178 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.178 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.178 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.178 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.178 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.435 [255/268] Linking target lib/librte_net.so.24.1 00:03:23.435 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:23.435 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.435 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:23.435 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:23.435 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:23.692 [261/268] Linking target lib/librte_hash.so.24.1 00:03:23.692 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:23.692 [263/268] Linking target lib/librte_security.so.24.1 00:03:23.692 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:23.692 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:23.692 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:23.950 [267/268] Linking target lib/librte_power.so.24.1 00:03:23.950 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:23.950 INFO: autodetecting backend as ninja 00:03:23.950 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:03:24.880 CC lib/log/log.o 00:03:24.880 CC lib/ut_mock/mock.o 00:03:24.880 CC lib/log/log_flags.o 00:03:24.880 CC lib/log/log_deprecated.o 00:03:24.880 CC lib/ut/ut.o 00:03:24.880 LIB libspdk_ut_mock.a 00:03:24.880 LIB libspdk_log.a 00:03:24.880 LIB libspdk_ut.a 00:03:25.138 CC lib/dma/dma.o 00:03:25.138 CC lib/util/base64.o 
00:03:25.138 CC lib/util/cpuset.o 00:03:25.138 CC lib/util/bit_array.o 00:03:25.138 CXX lib/trace_parser/trace.o 00:03:25.138 CC lib/util/crc16.o 00:03:25.138 CC lib/util/crc32.o 00:03:25.138 CC lib/util/crc32c.o 00:03:25.138 CC lib/util/crc64.o 00:03:25.138 CC lib/util/crc32_ieee.o 00:03:25.138 CC lib/util/file.o 00:03:25.138 CC lib/util/dif.o 00:03:25.138 CC lib/util/fd.o 00:03:25.138 CC lib/util/hexlify.o 00:03:25.138 CC lib/util/math.o 00:03:25.138 CC lib/util/iov.o 00:03:25.138 CC lib/ioat/ioat.o 00:03:25.138 CC lib/util/pipe.o 00:03:25.138 CC lib/util/string.o 00:03:25.138 CC lib/util/strerror_tls.o 00:03:25.138 CC lib/util/xor.o 00:03:25.138 CC lib/util/uuid.o 00:03:25.138 CC lib/util/fd_group.o 00:03:25.138 CC lib/util/zipf.o 00:03:25.396 LIB libspdk_dma.a 00:03:25.396 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.396 CC lib/vfio_user/host/vfio_user.o 00:03:25.396 LIB libspdk_ioat.a 00:03:25.396 LIB libspdk_vfio_user.a 00:03:25.653 LIB libspdk_util.a 00:03:25.653 LIB libspdk_trace_parser.a 00:03:25.909 CC lib/rdma_utils/rdma_utils.o 00:03:25.909 CC lib/conf/conf.o 00:03:25.909 CC lib/json/json_parse.o 00:03:25.909 CC lib/json/json_write.o 00:03:25.909 CC lib/json/json_util.o 00:03:25.909 CC lib/env_dpdk/memory.o 00:03:25.909 CC lib/env_dpdk/env.o 00:03:25.909 CC lib/rdma_provider/common.o 00:03:25.909 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.909 CC lib/env_dpdk/threads.o 00:03:25.909 CC lib/env_dpdk/pci.o 00:03:25.909 CC lib/env_dpdk/init.o 00:03:25.909 CC lib/vmd/vmd.o 00:03:25.909 CC lib/vmd/led.o 00:03:25.909 CC lib/env_dpdk/pci_ioat.o 00:03:25.910 CC lib/env_dpdk/pci_idxd.o 00:03:25.910 CC lib/env_dpdk/pci_vmd.o 00:03:25.910 CC lib/env_dpdk/pci_virtio.o 00:03:25.910 CC lib/env_dpdk/pci_event.o 00:03:25.910 CC lib/env_dpdk/sigbus_handler.o 00:03:25.910 CC lib/env_dpdk/pci_dpdk.o 00:03:25.910 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.910 CC lib/idxd/idxd.o 00:03:25.910 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.910 CC lib/idxd/idxd_user.o 00:03:25.910 CC lib/idxd/idxd_kernel.o 00:03:25.910 LIB libspdk_rdma_provider.a 00:03:25.910 LIB libspdk_conf.a 00:03:26.166 LIB libspdk_rdma_utils.a 00:03:26.166 LIB libspdk_json.a 00:03:26.166 LIB libspdk_idxd.a 00:03:26.166 LIB libspdk_vmd.a 00:03:26.423 CC lib/jsonrpc/jsonrpc_server.o 00:03:26.423 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:26.423 CC lib/jsonrpc/jsonrpc_client.o 00:03:26.423 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:26.423 LIB libspdk_jsonrpc.a 00:03:26.989 CC lib/rpc/rpc.o 00:03:26.989 LIB libspdk_env_dpdk.a 00:03:26.989 LIB libspdk_rpc.a 00:03:27.247 CC lib/notify/notify.o 00:03:27.247 CC lib/notify/notify_rpc.o 00:03:27.247 CC lib/trace/trace.o 00:03:27.247 CC lib/trace/trace_flags.o 00:03:27.247 CC lib/trace/trace_rpc.o 00:03:27.247 CC lib/keyring/keyring.o 00:03:27.247 CC lib/keyring/keyring_rpc.o 00:03:27.504 LIB libspdk_notify.a 00:03:27.504 LIB libspdk_trace.a 00:03:27.504 LIB libspdk_keyring.a 00:03:27.875 CC lib/thread/thread.o 00:03:27.875 CC lib/thread/iobuf.o 00:03:27.875 CC lib/sock/sock.o 00:03:27.875 CC lib/sock/sock_rpc.o 00:03:28.134 LIB libspdk_sock.a 00:03:28.391 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.391 CC lib/nvme/nvme_ctrlr.o 00:03:28.391 CC lib/nvme/nvme_ns_cmd.o 00:03:28.391 CC lib/nvme/nvme_ns.o 00:03:28.391 CC lib/nvme/nvme_fabric.o 00:03:28.391 CC lib/nvme/nvme_pcie.o 00:03:28.391 CC lib/nvme/nvme_qpair.o 00:03:28.391 CC lib/nvme/nvme_pcie_common.o 00:03:28.391 CC lib/nvme/nvme.o 00:03:28.391 CC lib/nvme/nvme_transport.o 00:03:28.391 CC lib/nvme/nvme_quirks.o 00:03:28.391 CC 
lib/nvme/nvme_discovery.o 00:03:28.391 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.391 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.391 CC lib/nvme/nvme_opal.o 00:03:28.391 CC lib/nvme/nvme_tcp.o 00:03:28.391 CC lib/nvme/nvme_io_msg.o 00:03:28.391 CC lib/nvme/nvme_poll_group.o 00:03:28.391 CC lib/nvme/nvme_zns.o 00:03:28.391 CC lib/nvme/nvme_stubs.o 00:03:28.391 CC lib/nvme/nvme_auth.o 00:03:28.391 CC lib/nvme/nvme_cuse.o 00:03:28.391 CC lib/nvme/nvme_vfio_user.o 00:03:28.391 CC lib/nvme/nvme_rdma.o 00:03:28.649 LIB libspdk_thread.a 00:03:28.907 CC lib/init/json_config.o 00:03:28.907 CC lib/init/subsystem.o 00:03:28.907 CC lib/accel/accel.o 00:03:28.907 CC lib/init/subsystem_rpc.o 00:03:28.907 CC lib/accel/accel_rpc.o 00:03:28.907 CC lib/init/rpc.o 00:03:28.907 CC lib/vfu_tgt/tgt_endpoint.o 00:03:28.907 CC lib/accel/accel_sw.o 00:03:28.907 CC lib/vfu_tgt/tgt_rpc.o 00:03:28.907 CC lib/virtio/virtio.o 00:03:28.907 CC lib/virtio/virtio_vhost_user.o 00:03:28.907 CC lib/virtio/virtio_vfio_user.o 00:03:28.907 CC lib/virtio/virtio_pci.o 00:03:28.907 CC lib/blob/blobstore.o 00:03:28.907 CC lib/blob/blob_bs_dev.o 00:03:28.907 CC lib/blob/request.o 00:03:28.907 CC lib/blob/zeroes.o 00:03:29.165 LIB libspdk_init.a 00:03:29.165 LIB libspdk_vfu_tgt.a 00:03:29.165 LIB libspdk_virtio.a 00:03:29.423 CC lib/event/reactor.o 00:03:29.423 CC lib/event/app.o 00:03:29.423 CC lib/event/scheduler_static.o 00:03:29.423 CC lib/event/log_rpc.o 00:03:29.423 CC lib/event/app_rpc.o 00:03:29.682 LIB libspdk_event.a 00:03:29.682 LIB libspdk_accel.a 00:03:29.682 LIB libspdk_nvme.a 00:03:29.940 CC lib/bdev/bdev.o 00:03:29.940 CC lib/bdev/bdev_rpc.o 00:03:29.940 CC lib/bdev/scsi_nvme.o 00:03:29.940 CC lib/bdev/bdev_zone.o 00:03:29.940 CC lib/bdev/part.o 00:03:30.876 LIB libspdk_blob.a 00:03:31.135 CC lib/blobfs/blobfs.o 00:03:31.135 CC lib/blobfs/tree.o 00:03:31.135 CC lib/lvol/lvol.o 00:03:31.703 LIB libspdk_lvol.a 00:03:31.703 LIB libspdk_blobfs.a 00:03:31.703 LIB libspdk_bdev.a 00:03:31.966 CC lib/ftl/ftl_debug.o 00:03:31.966 CC lib/ftl/ftl_core.o 00:03:31.966 CC lib/ftl/ftl_init.o 00:03:31.966 CC lib/ftl/ftl_layout.o 00:03:31.966 CC lib/ftl/ftl_l2p.o 00:03:31.966 CC lib/ftl/ftl_io.o 00:03:31.966 CC lib/ftl/ftl_sb.o 00:03:31.966 CC lib/ftl/ftl_l2p_flat.o 00:03:31.966 CC lib/ftl/ftl_nv_cache.o 00:03:31.966 CC lib/ftl/ftl_band.o 00:03:31.966 CC lib/ftl/ftl_band_ops.o 00:03:31.966 CC lib/ftl/ftl_writer.o 00:03:31.966 CC lib/ftl/ftl_rq.o 00:03:31.966 CC lib/ftl/ftl_reloc.o 00:03:31.966 CC lib/ftl/ftl_l2p_cache.o 00:03:31.966 CC lib/ftl/ftl_p2l.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.966 CC lib/scsi/dev.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.966 CC lib/scsi/lun.o 00:03:31.966 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.966 CC lib/scsi/port.o 00:03:31.966 CC lib/ftl/utils/ftl_conf.o 00:03:31.966 CC lib/scsi/scsi.o 00:03:31.966 CC lib/ftl/utils/ftl_md.o 00:03:31.966 CC lib/scsi/scsi_bdev.o 00:03:31.966 CC lib/nbd/nbd.o 00:03:31.966 CC lib/nvmf/ctrlr.o 00:03:31.966 CC lib/scsi/scsi_rpc.o 00:03:31.967 CC lib/nbd/nbd_rpc.o 00:03:31.967 CC 
lib/ftl/utils/ftl_property.o 00:03:31.967 CC lib/scsi/scsi_pr.o 00:03:31.967 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.967 CC lib/scsi/task.o 00:03:31.967 CC lib/nvmf/ctrlr_discovery.o 00:03:31.967 CC lib/ftl/utils/ftl_mempool.o 00:03:31.967 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.967 CC lib/nvmf/ctrlr_bdev.o 00:03:31.967 CC lib/nvmf/subsystem.o 00:03:31.967 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.967 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.967 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.967 CC lib/nvmf/nvmf_rpc.o 00:03:31.967 CC lib/nvmf/nvmf.o 00:03:31.967 CC lib/nvmf/transport.o 00:03:31.967 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.967 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:31.967 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.967 CC lib/nvmf/tcp.o 00:03:31.967 CC lib/nvmf/stubs.o 00:03:31.967 CC lib/nvmf/mdns_server.o 00:03:31.967 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.967 CC lib/nvmf/vfio_user.o 00:03:31.967 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.967 CC lib/nvmf/rdma.o 00:03:31.967 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.967 CC lib/nvmf/auth.o 00:03:31.967 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.967 CC lib/ublk/ublk.o 00:03:31.967 CC lib/ftl/base/ftl_base_dev.o 00:03:31.967 CC lib/ftl/base/ftl_base_bdev.o 00:03:31.967 CC lib/ublk/ublk_rpc.o 00:03:32.225 CC lib/ftl/ftl_trace.o 00:03:32.483 LIB libspdk_scsi.a 00:03:32.483 LIB libspdk_nbd.a 00:03:32.483 LIB libspdk_ublk.a 00:03:32.742 LIB libspdk_ftl.a 00:03:32.742 CC lib/iscsi/iscsi.o 00:03:32.742 CC lib/iscsi/conn.o 00:03:32.742 CC lib/iscsi/init_grp.o 00:03:32.742 CC lib/iscsi/md5.o 00:03:32.742 CC lib/iscsi/param.o 00:03:32.742 CC lib/iscsi/portal_grp.o 00:03:32.742 CC lib/iscsi/tgt_node.o 00:03:32.742 CC lib/vhost/vhost.o 00:03:32.742 CC lib/iscsi/iscsi_subsystem.o 00:03:32.742 CC lib/vhost/vhost_rpc.o 00:03:32.742 CC lib/iscsi/iscsi_rpc.o 00:03:32.742 CC lib/vhost/vhost_scsi.o 00:03:32.742 CC lib/iscsi/task.o 00:03:32.742 CC lib/vhost/vhost_blk.o 00:03:32.742 CC lib/vhost/rte_vhost_user.o 00:03:33.307 LIB libspdk_nvmf.a 00:03:33.307 LIB libspdk_vhost.a 00:03:33.566 LIB libspdk_iscsi.a 00:03:34.133 CC module/vfu_device/vfu_virtio_scsi.o 00:03:34.133 CC module/vfu_device/vfu_virtio.o 00:03:34.133 CC module/vfu_device/vfu_virtio_blk.o 00:03:34.133 CC module/vfu_device/vfu_virtio_rpc.o 00:03:34.133 CC module/env_dpdk/env_dpdk_rpc.o 00:03:34.133 CC module/keyring/linux/keyring.o 00:03:34.133 CC module/keyring/linux/keyring_rpc.o 00:03:34.133 CC module/sock/posix/posix.o 00:03:34.133 CC module/keyring/file/keyring.o 00:03:34.133 CC module/keyring/file/keyring_rpc.o 00:03:34.133 CC module/accel/error/accel_error_rpc.o 00:03:34.133 CC module/accel/error/accel_error.o 00:03:34.133 CC module/blob/bdev/blob_bdev.o 00:03:34.133 LIB libspdk_env_dpdk_rpc.a 00:03:34.133 CC module/accel/dsa/accel_dsa.o 00:03:34.133 CC module/accel/dsa/accel_dsa_rpc.o 00:03:34.133 CC module/accel/ioat/accel_ioat.o 00:03:34.133 CC module/accel/ioat/accel_ioat_rpc.o 00:03:34.133 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:34.133 CC module/accel/iaa/accel_iaa.o 00:03:34.133 CC module/accel/iaa/accel_iaa_rpc.o 00:03:34.133 CC module/scheduler/gscheduler/gscheduler.o 00:03:34.133 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:34.133 LIB libspdk_keyring_linux.a 00:03:34.133 LIB libspdk_keyring_file.a 00:03:34.133 LIB libspdk_scheduler_dpdk_governor.a 00:03:34.392 LIB libspdk_accel_error.a 00:03:34.392 LIB libspdk_scheduler_gscheduler.a 00:03:34.392 LIB libspdk_accel_ioat.a 00:03:34.392 LIB 
libspdk_scheduler_dynamic.a 00:03:34.392 LIB libspdk_accel_iaa.a 00:03:34.392 LIB libspdk_blob_bdev.a 00:03:34.392 LIB libspdk_accel_dsa.a 00:03:34.392 LIB libspdk_vfu_device.a 00:03:34.650 LIB libspdk_sock_posix.a 00:03:34.650 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.650 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.650 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.650 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.650 CC module/bdev/error/vbdev_error.o 00:03:34.650 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.650 CC module/bdev/lvol/vbdev_lvol.o 00:03:34.650 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.650 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.650 CC module/bdev/raid/bdev_raid.o 00:03:34.650 CC module/bdev/raid/bdev_raid_rpc.o 00:03:34.650 CC module/bdev/split/vbdev_split.o 00:03:34.650 CC module/bdev/raid/bdev_raid_sb.o 00:03:34.650 CC module/bdev/raid/raid0.o 00:03:34.650 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:34.650 CC module/bdev/raid/raid1.o 00:03:34.650 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:34.650 CC module/bdev/raid/concat.o 00:03:34.650 CC module/bdev/gpt/gpt.o 00:03:34.650 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:34.650 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.650 CC module/bdev/nvme/nvme_rpc.o 00:03:34.650 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.650 CC module/bdev/nvme/bdev_nvme.o 00:03:34.650 CC module/bdev/iscsi/bdev_iscsi.o 00:03:34.650 CC module/bdev/nvme/vbdev_opal.o 00:03:34.650 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:34.650 CC module/bdev/nvme/bdev_mdns_client.o 00:03:34.650 CC module/bdev/aio/bdev_aio.o 00:03:34.650 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.650 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:34.650 CC module/bdev/null/bdev_null.o 00:03:34.650 CC module/bdev/null/bdev_null_rpc.o 00:03:34.650 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:34.650 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.650 CC module/bdev/delay/vbdev_delay.o 00:03:34.650 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:34.650 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.650 CC module/bdev/malloc/bdev_malloc.o 00:03:34.650 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.650 CC module/bdev/ftl/bdev_ftl.o 00:03:34.650 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:34.908 LIB libspdk_blobfs_bdev.a 00:03:34.908 LIB libspdk_bdev_split.a 00:03:34.908 LIB libspdk_bdev_error.a 00:03:34.908 LIB libspdk_bdev_gpt.a 00:03:34.908 LIB libspdk_bdev_null.a 00:03:34.908 LIB libspdk_bdev_aio.a 00:03:34.908 LIB libspdk_bdev_zone_block.a 00:03:34.908 LIB libspdk_bdev_ftl.a 00:03:34.908 LIB libspdk_bdev_delay.a 00:03:35.166 LIB libspdk_bdev_passthru.a 00:03:35.166 LIB libspdk_bdev_iscsi.a 00:03:35.166 LIB libspdk_bdev_lvol.a 00:03:35.166 LIB libspdk_bdev_malloc.a 00:03:35.166 LIB libspdk_bdev_virtio.a 00:03:35.425 LIB libspdk_bdev_raid.a 00:03:35.991 LIB libspdk_bdev_nvme.a 00:03:36.557 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.557 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.557 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.557 CC module/event/subsystems/vmd/vmd.o 00:03:36.557 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.557 CC module/event/subsystems/keyring/keyring.o 00:03:36.557 CC module/event/subsystems/sock/sock.o 00:03:36.816 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.816 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:36.816 LIB libspdk_event_vmd.a 00:03:36.816 LIB libspdk_event_iobuf.a 00:03:36.816 LIB libspdk_event_vhost_blk.a 00:03:36.816 LIB libspdk_event_keyring.a 
00:03:36.816 LIB libspdk_event_sock.a 00:03:36.816 LIB libspdk_event_vfu_tgt.a 00:03:36.816 LIB libspdk_event_scheduler.a 00:03:37.074 CC module/event/subsystems/accel/accel.o 00:03:37.074 LIB libspdk_event_accel.a 00:03:37.641 CC module/event/subsystems/bdev/bdev.o 00:03:37.641 LIB libspdk_event_bdev.a 00:03:37.900 CC module/event/subsystems/nbd/nbd.o 00:03:37.900 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.900 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.900 CC module/event/subsystems/scsi/scsi.o 00:03:37.900 CC module/event/subsystems/ublk/ublk.o 00:03:37.900 LIB libspdk_event_nbd.a 00:03:37.900 LIB libspdk_event_scsi.a 00:03:38.158 LIB libspdk_event_ublk.a 00:03:38.158 LIB libspdk_event_nvmf.a 00:03:38.416 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.416 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.416 LIB libspdk_event_vhost_scsi.a 00:03:38.416 LIB libspdk_event_iscsi.a 00:03:38.675 CC app/trace_record/trace_record.o 00:03:38.675 CC app/spdk_nvme_perf/perf.o 00:03:38.675 CC app/spdk_top/spdk_top.o 00:03:38.675 CC app/spdk_nvme_identify/identify.o 00:03:38.675 CXX app/trace/trace.o 00:03:38.675 CC test/rpc_client/rpc_client_test.o 00:03:38.675 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.675 CC app/spdk_lspci/spdk_lspci.o 00:03:38.675 TEST_HEADER include/spdk/assert.h 00:03:38.675 TEST_HEADER include/spdk/accel.h 00:03:38.675 TEST_HEADER include/spdk/accel_module.h 00:03:38.675 TEST_HEADER include/spdk/bdev.h 00:03:38.675 TEST_HEADER include/spdk/barrier.h 00:03:38.675 TEST_HEADER include/spdk/base64.h 00:03:38.675 TEST_HEADER include/spdk/bdev_module.h 00:03:38.675 TEST_HEADER include/spdk/bit_array.h 00:03:38.675 CC app/nvmf_tgt/nvmf_main.o 00:03:38.675 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.675 TEST_HEADER include/spdk/bit_pool.h 00:03:38.675 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.675 TEST_HEADER include/spdk/blobfs.h 00:03:38.675 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.675 TEST_HEADER include/spdk/config.h 00:03:38.675 TEST_HEADER include/spdk/blob.h 00:03:38.675 TEST_HEADER include/spdk/conf.h 00:03:38.675 TEST_HEADER include/spdk/cpuset.h 00:03:38.675 TEST_HEADER include/spdk/crc16.h 00:03:38.675 TEST_HEADER include/spdk/crc64.h 00:03:38.675 TEST_HEADER include/spdk/crc32.h 00:03:38.675 TEST_HEADER include/spdk/dif.h 00:03:38.675 TEST_HEADER include/spdk/dma.h 00:03:38.675 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:38.675 TEST_HEADER include/spdk/endian.h 00:03:38.675 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.675 CC app/spdk_dd/spdk_dd.o 00:03:38.675 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.675 TEST_HEADER include/spdk/event.h 00:03:38.676 TEST_HEADER include/spdk/fd.h 00:03:38.676 TEST_HEADER include/spdk/fd_group.h 00:03:38.676 TEST_HEADER include/spdk/env.h 00:03:38.676 TEST_HEADER include/spdk/file.h 00:03:38.676 TEST_HEADER include/spdk/ftl.h 00:03:38.676 TEST_HEADER include/spdk/hexlify.h 00:03:38.676 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.935 TEST_HEADER include/spdk/idxd.h 00:03:38.935 TEST_HEADER include/spdk/histogram_data.h 00:03:38.935 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.935 TEST_HEADER include/spdk/init.h 00:03:38.935 TEST_HEADER include/spdk/ioat.h 00:03:38.935 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.935 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.935 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.935 TEST_HEADER include/spdk/json.h 00:03:38.935 TEST_HEADER include/spdk/keyring_module.h 00:03:38.935 TEST_HEADER include/spdk/keyring.h 00:03:38.935 TEST_HEADER include/spdk/log.h 
00:03:38.935 TEST_HEADER include/spdk/lvol.h 00:03:38.935 TEST_HEADER include/spdk/likely.h 00:03:38.935 TEST_HEADER include/spdk/memory.h 00:03:38.935 TEST_HEADER include/spdk/mmio.h 00:03:38.935 TEST_HEADER include/spdk/nbd.h 00:03:38.935 TEST_HEADER include/spdk/notify.h 00:03:38.935 TEST_HEADER include/spdk/nvme.h 00:03:38.935 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.935 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.935 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.935 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.935 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.935 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.935 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.935 TEST_HEADER include/spdk/nvmf.h 00:03:38.935 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.935 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.935 TEST_HEADER include/spdk/opal.h 00:03:38.935 TEST_HEADER include/spdk/opal_spec.h 00:03:38.935 TEST_HEADER include/spdk/pci_ids.h 00:03:38.935 TEST_HEADER include/spdk/pipe.h 00:03:38.935 TEST_HEADER include/spdk/queue.h 00:03:38.935 TEST_HEADER include/spdk/reduce.h 00:03:38.935 TEST_HEADER include/spdk/rpc.h 00:03:38.935 TEST_HEADER include/spdk/scheduler.h 00:03:38.935 TEST_HEADER include/spdk/scsi.h 00:03:38.935 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.935 TEST_HEADER include/spdk/sock.h 00:03:38.935 TEST_HEADER include/spdk/stdinc.h 00:03:38.935 TEST_HEADER include/spdk/string.h 00:03:38.936 TEST_HEADER include/spdk/thread.h 00:03:38.936 TEST_HEADER include/spdk/trace.h 00:03:38.936 TEST_HEADER include/spdk/trace_parser.h 00:03:38.936 TEST_HEADER include/spdk/tree.h 00:03:38.936 TEST_HEADER include/spdk/ublk.h 00:03:38.936 TEST_HEADER include/spdk/util.h 00:03:38.936 TEST_HEADER include/spdk/uuid.h 00:03:38.936 TEST_HEADER include/spdk/version.h 00:03:38.936 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.936 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.936 TEST_HEADER include/spdk/vhost.h 00:03:38.936 TEST_HEADER include/spdk/vmd.h 00:03:38.936 CC app/spdk_tgt/spdk_tgt.o 00:03:38.936 TEST_HEADER include/spdk/xor.h 00:03:38.936 TEST_HEADER include/spdk/zipf.h 00:03:38.936 CXX test/cpp_headers/accel.o 00:03:38.936 CXX test/cpp_headers/accel_module.o 00:03:38.936 CXX test/cpp_headers/assert.o 00:03:38.936 CXX test/cpp_headers/barrier.o 00:03:38.936 CXX test/cpp_headers/base64.o 00:03:38.936 CXX test/cpp_headers/bdev.o 00:03:38.936 CXX test/cpp_headers/bdev_module.o 00:03:38.936 CXX test/cpp_headers/bdev_zone.o 00:03:38.936 CXX test/cpp_headers/bit_array.o 00:03:38.936 CC examples/ioat/verify/verify.o 00:03:38.936 CXX test/cpp_headers/bit_pool.o 00:03:38.936 CXX test/cpp_headers/blobfs_bdev.o 00:03:38.936 CXX test/cpp_headers/blob_bdev.o 00:03:38.936 CXX test/cpp_headers/blobfs.o 00:03:38.936 CXX test/cpp_headers/blob.o 00:03:38.936 CXX test/cpp_headers/conf.o 00:03:38.936 CXX test/cpp_headers/config.o 00:03:38.936 CC examples/util/zipf/zipf.o 00:03:38.936 CXX test/cpp_headers/cpuset.o 00:03:38.936 CXX test/cpp_headers/crc16.o 00:03:38.936 CXX test/cpp_headers/crc32.o 00:03:38.936 CXX test/cpp_headers/crc64.o 00:03:38.936 CXX test/cpp_headers/dif.o 00:03:38.936 CXX test/cpp_headers/dma.o 00:03:38.936 CXX test/cpp_headers/endian.o 00:03:38.936 CXX test/cpp_headers/env_dpdk.o 00:03:38.936 CXX test/cpp_headers/env.o 00:03:38.936 CXX test/cpp_headers/event.o 00:03:38.936 CXX test/cpp_headers/fd.o 00:03:38.936 CXX test/cpp_headers/fd_group.o 00:03:38.936 CC examples/ioat/perf/perf.o 00:03:38.936 CXX test/cpp_headers/file.o 00:03:38.936 CXX 
test/cpp_headers/ftl.o 00:03:38.936 CC test/app/histogram_perf/histogram_perf.o 00:03:38.936 CXX test/cpp_headers/gpt_spec.o 00:03:38.936 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:38.936 CXX test/cpp_headers/hexlify.o 00:03:38.936 CXX test/cpp_headers/histogram_data.o 00:03:38.936 CXX test/cpp_headers/idxd.o 00:03:38.936 CC test/app/jsoncat/jsoncat.o 00:03:38.936 CXX test/cpp_headers/idxd_spec.o 00:03:38.936 CXX test/cpp_headers/init.o 00:03:38.936 CXX test/cpp_headers/ioat.o 00:03:38.936 CC app/fio/nvme/fio_plugin.o 00:03:38.936 CC test/app/stub/stub.o 00:03:38.936 CC test/env/pci/pci_ut.o 00:03:38.936 CC test/env/memory/memory_ut.o 00:03:38.936 CC test/env/vtophys/vtophys.o 00:03:38.936 CC test/thread/poller_perf/poller_perf.o 00:03:38.936 CC test/thread/lock/spdk_lock.o 00:03:38.936 CXX test/cpp_headers/ioat_spec.o 00:03:38.936 LINK spdk_lspci 00:03:38.936 CC app/fio/bdev/fio_plugin.o 00:03:38.936 CC test/dma/test_dma/test_dma.o 00:03:38.936 CC test/app/bdev_svc/bdev_svc.o 00:03:38.936 LINK rpc_client_test 00:03:38.936 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.936 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.936 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.936 LINK spdk_nvme_discover 00:03:38.936 LINK spdk_trace_record 00:03:38.936 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.936 LINK nvmf_tgt 00:03:38.936 LINK interrupt_tgt 00:03:38.936 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:38.936 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:38.936 LINK jsoncat 00:03:38.936 CXX test/cpp_headers/iscsi_spec.o 00:03:38.936 LINK vtophys 00:03:38.936 CXX test/cpp_headers/json.o 00:03:38.936 LINK histogram_perf 00:03:39.201 CXX test/cpp_headers/jsonrpc.o 00:03:39.201 LINK zipf 00:03:39.201 CXX test/cpp_headers/keyring.o 00:03:39.201 CXX test/cpp_headers/keyring_module.o 00:03:39.201 CXX test/cpp_headers/likely.o 00:03:39.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.201 LINK env_dpdk_post_init 00:03:39.201 LINK poller_perf 00:03:39.201 CXX test/cpp_headers/log.o 00:03:39.201 LINK iscsi_tgt 00:03:39.201 CXX test/cpp_headers/lvol.o 00:03:39.201 CXX test/cpp_headers/memory.o 00:03:39.201 CXX test/cpp_headers/mmio.o 00:03:39.201 CXX test/cpp_headers/nbd.o 00:03:39.201 CXX test/cpp_headers/notify.o 00:03:39.201 CXX test/cpp_headers/nvme.o 00:03:39.201 CXX test/cpp_headers/nvme_intel.o 00:03:39.201 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.201 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.201 CXX test/cpp_headers/nvme_spec.o 00:03:39.201 CXX test/cpp_headers/nvme_zns.o 00:03:39.201 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.201 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.201 CXX test/cpp_headers/nvmf.o 00:03:39.201 CXX test/cpp_headers/nvmf_spec.o 00:03:39.201 CXX test/cpp_headers/nvmf_transport.o 00:03:39.201 CXX test/cpp_headers/opal.o 00:03:39.201 CXX test/cpp_headers/opal_spec.o 00:03:39.201 CXX test/cpp_headers/pci_ids.o 00:03:39.201 CXX test/cpp_headers/pipe.o 00:03:39.201 CXX test/cpp_headers/queue.o 00:03:39.201 CXX test/cpp_headers/reduce.o 00:03:39.201 LINK stub 00:03:39.201 CXX test/cpp_headers/rpc.o 00:03:39.201 CXX test/cpp_headers/scheduler.o 00:03:39.201 CXX test/cpp_headers/scsi.o 00:03:39.201 CXX test/cpp_headers/scsi_spec.o 00:03:39.201 CXX test/cpp_headers/sock.o 00:03:39.201 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:39.201 struct spdk_nvme_fdp_ruhs ruhs; 
00:03:39.201 ^ 00:03:39.201 LINK verify 00:03:39.201 CXX test/cpp_headers/stdinc.o 00:03:39.201 LINK ioat_perf 00:03:39.201 CXX test/cpp_headers/string.o 00:03:39.201 CXX test/cpp_headers/thread.o 00:03:39.201 LINK spdk_tgt 00:03:39.201 CXX test/cpp_headers/trace.o 00:03:39.201 CXX test/cpp_headers/trace_parser.o 00:03:39.201 LINK bdev_svc 00:03:39.201 CXX test/cpp_headers/tree.o 00:03:39.201 CXX test/cpp_headers/ublk.o 00:03:39.201 LINK spdk_trace 00:03:39.201 CXX test/cpp_headers/util.o 00:03:39.201 CXX test/cpp_headers/uuid.o 00:03:39.201 CXX test/cpp_headers/version.o 00:03:39.201 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.201 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.201 CXX test/cpp_headers/vhost.o 00:03:39.201 CXX test/cpp_headers/vmd.o 00:03:39.201 CXX test/cpp_headers/xor.o 00:03:39.201 CXX test/cpp_headers/zipf.o 00:03:39.460 LINK test_dma 00:03:39.460 LINK pci_ut 00:03:39.460 LINK spdk_dd 00:03:39.460 LINK llvm_vfio_fuzz 00:03:39.460 LINK nvme_fuzz 00:03:39.460 LINK spdk_bdev 00:03:39.460 1 warning generated. 00:03:39.460 LINK spdk_nvme_perf 00:03:39.460 LINK vhost_fuzz 00:03:39.717 LINK spdk_nvme 00:03:39.717 LINK spdk_nvme_identify 00:03:39.717 LINK mem_callbacks 00:03:39.717 CC examples/idxd/perf/perf.o 00:03:39.717 CC examples/vmd/led/led.o 00:03:39.717 CC examples/sock/hello_world/hello_sock.o 00:03:39.717 CC examples/vmd/lsvmd/lsvmd.o 00:03:39.717 LINK spdk_top 00:03:39.717 LINK llvm_nvme_fuzz 00:03:39.717 CC examples/thread/thread/thread_ex.o 00:03:39.975 CC app/vhost/vhost.o 00:03:39.975 LINK led 00:03:39.975 LINK lsvmd 00:03:39.975 LINK hello_sock 00:03:39.975 LINK idxd_perf 00:03:39.975 LINK thread 00:03:39.975 LINK memory_ut 00:03:39.975 LINK vhost 00:03:40.231 LINK spdk_lock 00:03:40.231 LINK iscsi_fuzz 00:03:40.797 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.797 CC examples/nvme/abort/abort.o 00:03:40.797 CC examples/nvme/arbitration/arbitration.o 00:03:40.797 CC examples/nvme/hello_world/hello_world.o 00:03:40.797 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.797 CC examples/nvme/hotplug/hotplug.o 00:03:40.797 CC examples/nvme/reconnect/reconnect.o 00:03:40.797 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:40.797 LINK pmr_persistence 00:03:40.797 LINK cmb_copy 00:03:40.797 LINK hello_world 00:03:40.797 LINK hotplug 00:03:40.797 CC test/event/event_perf/event_perf.o 00:03:40.797 CC test/event/reactor/reactor.o 00:03:40.797 CC test/event/reactor_perf/reactor_perf.o 00:03:40.797 LINK abort 00:03:40.797 LINK arbitration 00:03:40.797 CC test/event/app_repeat/app_repeat.o 00:03:40.797 LINK reconnect 00:03:40.797 CC test/event/scheduler/scheduler.o 00:03:41.056 LINK nvme_manage 00:03:41.056 LINK event_perf 00:03:41.056 LINK reactor 00:03:41.056 LINK reactor_perf 00:03:41.056 LINK app_repeat 00:03:41.056 LINK scheduler 00:03:41.313 CC test/nvme/connect_stress/connect_stress.o 00:03:41.313 CC test/nvme/compliance/nvme_compliance.o 00:03:41.313 CC test/nvme/err_injection/err_injection.o 00:03:41.313 CC test/nvme/reserve/reserve.o 00:03:41.313 CC test/nvme/reset/reset.o 00:03:41.313 CC test/nvme/overhead/overhead.o 00:03:41.313 CC test/nvme/boot_partition/boot_partition.o 00:03:41.313 CC test/nvme/aer/aer.o 00:03:41.313 CC test/nvme/simple_copy/simple_copy.o 00:03:41.313 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.313 CC test/nvme/cuse/cuse.o 00:03:41.313 CC test/nvme/sgl/sgl.o 00:03:41.313 CC test/nvme/e2edp/nvme_dp.o 00:03:41.313 CC test/nvme/startup/startup.o 00:03:41.313 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.313 CC 
test/nvme/fdp/fdp.o 00:03:41.313 CC test/accel/dif/dif.o 00:03:41.313 CC test/blobfs/mkfs/mkfs.o 00:03:41.313 CC test/lvol/esnap/esnap.o 00:03:41.571 LINK boot_partition 00:03:41.571 LINK connect_stress 00:03:41.571 LINK err_injection 00:03:41.571 LINK fused_ordering 00:03:41.571 LINK reserve 00:03:41.571 LINK simple_copy 00:03:41.571 LINK reset 00:03:41.571 LINK nvme_dp 00:03:41.571 LINK overhead 00:03:41.571 LINK sgl 00:03:41.571 LINK aer 00:03:41.571 LINK startup 00:03:41.571 LINK doorbell_aers 00:03:41.571 LINK mkfs 00:03:41.571 LINK fdp 00:03:41.571 LINK dif 00:03:41.829 LINK nvme_compliance 00:03:41.829 CC examples/accel/perf/accel_perf.o 00:03:41.829 CC examples/blob/hello_world/hello_blob.o 00:03:41.829 CC examples/blob/cli/blobcli.o 00:03:42.087 LINK hello_blob 00:03:42.087 LINK accel_perf 00:03:42.345 LINK cuse 00:03:42.345 LINK blobcli 00:03:42.912 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.912 CC examples/bdev/hello_world/hello_bdev.o 00:03:43.169 LINK hello_bdev 00:03:43.426 LINK bdevperf 00:03:43.426 CC test/bdev/bdevio/bdevio.o 00:03:43.684 LINK bdevio 00:03:45.058 LINK esnap 00:03:45.058 CC examples/nvmf/nvmf/nvmf.o 00:03:45.058 LINK nvmf 00:03:46.433 00:03:46.433 real 0m45.980s 00:03:46.433 user 6m10.482s 00:03:46.433 sys 2m21.378s 00:03:46.433 12:20:41 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:46.433 12:20:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:46.433 ************************************ 00:03:46.433 END TEST make 00:03:46.433 ************************************ 00:03:46.433 12:20:41 -- common/autotest_common.sh@1142 -- $ return 0 00:03:46.433 12:20:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.433 12:20:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.433 12:20:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.433 12:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.433 12:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.433 12:20:41 -- pm/common@44 -- $ pid=4035635 00:03:46.433 12:20:41 -- pm/common@50 -- $ kill -TERM 4035635 00:03:46.433 12:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.433 12:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.433 12:20:41 -- pm/common@44 -- $ pid=4035637 00:03:46.433 12:20:41 -- pm/common@50 -- $ kill -TERM 4035637 00:03:46.434 12:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.434 12:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:46.434 12:20:41 -- pm/common@44 -- $ pid=4035639 00:03:46.434 12:20:41 -- pm/common@50 -- $ kill -TERM 4035639 00:03:46.434 12:20:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.434 12:20:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:46.434 12:20:41 -- pm/common@44 -- $ pid=4035662 00:03:46.434 12:20:41 -- pm/common@50 -- $ sudo -E kill -TERM 4035662 00:03:46.693 12:20:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.693 12:20:41 -- nvmf/common.sh@7 -- # uname -s 00:03:46.693 12:20:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.693 12:20:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.693 
12:20:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.693 12:20:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.693 12:20:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.693 12:20:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.693 12:20:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.693 12:20:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.693 12:20:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.693 12:20:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.693 12:20:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:03:46.693 12:20:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:03:46.693 12:20:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.693 12:20:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.693 12:20:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:46.693 12:20:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.693 12:20:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:46.693 12:20:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.693 12:20:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.693 12:20:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.693 12:20:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.693 12:20:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.693 12:20:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.693 12:20:41 -- paths/export.sh@5 -- # export PATH 00:03:46.693 12:20:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.693 12:20:41 -- nvmf/common.sh@47 -- # : 0 00:03:46.693 12:20:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:46.693 12:20:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:46.693 12:20:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.693 12:20:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.693 12:20:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.693 12:20:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:46.693 12:20:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:46.693 12:20:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:46.693 12:20:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.693 12:20:41 
-- spdk/autotest.sh@32 -- # uname -s 00:03:46.693 12:20:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.693 12:20:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.693 12:20:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:46.693 12:20:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.693 12:20:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:46.693 12:20:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.693 12:20:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.693 12:20:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.693 12:20:41 -- spdk/autotest.sh@48 -- # udevadm_pid=4093217 00:03:46.693 12:20:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.693 12:20:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.693 12:20:41 -- pm/common@17 -- # local monitor 00:03:46.693 12:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.693 12:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.693 12:20:41 -- pm/common@21 -- # date +%s 00:03:46.693 12:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.693 12:20:41 -- pm/common@21 -- # date +%s 00:03:46.693 12:20:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.693 12:20:41 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721038841 00:03:46.693 12:20:41 -- pm/common@25 -- # sleep 1 00:03:46.693 12:20:41 -- pm/common@21 -- # date +%s 00:03:46.693 12:20:41 -- pm/common@21 -- # date +%s 00:03:46.693 12:20:41 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721038841 00:03:46.693 12:20:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721038841 00:03:46.693 12:20:41 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721038841 00:03:46.693 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721038841_collect-vmstat.pm.log 00:03:46.693 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721038841_collect-cpu-temp.pm.log 00:03:46.693 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721038841_collect-cpu-load.pm.log 00:03:46.693 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721038841_collect-bmc-pm.bmc.pm.log 00:03:47.629 12:20:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.629 12:20:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.629 12:20:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.629 12:20:42 
-- common/autotest_common.sh@10 -- # set +x 00:03:47.629 12:20:42 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.629 12:20:42 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:47.629 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:03:47.889 12:20:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:47.889 12:20:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:47.889 12:20:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:47.889 12:20:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:47.889 12:20:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:47.889 12:20:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.889 12:20:42 -- common/autotest_common.sh@1455 -- # uname 00:03:47.889 12:20:42 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:47.889 12:20:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.889 12:20:42 -- common/autotest_common.sh@1475 -- # uname 00:03:47.889 12:20:42 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:47.889 12:20:42 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:47.889 12:20:42 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:47.889 12:20:42 -- spdk/autotest.sh@72 -- # hash lcov 00:03:47.889 12:20:42 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:47.889 12:20:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:47.889 12:20:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.889 12:20:42 -- common/autotest_common.sh@10 -- # set +x 00:03:47.889 12:20:42 -- spdk/autotest.sh@91 -- # rm -f 00:03:47.889 12:20:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.176 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:03:51.176 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:51.176 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:51.176 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:51.176 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:51.176 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:51.434 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:51.693 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:51.693 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:53.596 12:20:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:53.596 12:20:48 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.596 12:20:48 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.596 12:20:48 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.596 12:20:48 -- common/autotest_common.sh@1672 -- # for nvme in 
/sys/block/nvme* 00:03:53.596 12:20:48 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.596 12:20:48 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.596 12:20:48 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.596 12:20:48 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.596 12:20:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:53.596 12:20:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.596 12:20:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:53.596 12:20:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:53.596 12:20:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:53.596 12:20:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.596 No valid GPT data, bailing 00:03:53.596 12:20:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.596 12:20:48 -- scripts/common.sh@391 -- # pt= 00:03:53.596 12:20:48 -- scripts/common.sh@392 -- # return 1 00:03:53.596 12:20:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.596 1+0 records in 00:03:53.596 1+0 records out 00:03:53.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676079 s, 155 MB/s 00:03:53.596 12:20:48 -- spdk/autotest.sh@118 -- # sync 00:03:53.596 12:20:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.596 12:20:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.596 12:20:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.945 12:20:53 -- spdk/autotest.sh@124 -- # uname -s 00:03:58.945 12:20:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:58.945 12:20:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:58.945 12:20:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.945 12:20:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.945 12:20:53 -- common/autotest_common.sh@10 -- # set +x 00:03:58.945 ************************************ 00:03:58.945 START TEST setup.sh 00:03:58.945 ************************************ 00:03:58.945 12:20:53 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:58.945 * Looking for test storage... 00:03:58.945 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:58.945 12:20:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:58.945 12:20:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:58.945 12:20:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:58.945 12:20:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.945 12:20:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.945 12:20:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.945 ************************************ 00:03:58.945 START TEST acl 00:03:58.945 ************************************ 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:58.945 * Looking for test storage... 
00:03:58.945 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:58.945 12:20:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.945 12:20:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.945 12:20:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:58.945 12:20:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:58.945 12:20:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:58.945 12:20:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:58.945 12:20:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:58.945 12:20:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.945 12:20:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.511 12:20:59 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:05.511 12:20:59 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:05.511 12:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.511 12:20:59 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:05.511 12:20:59 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.511 12:20:59 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:08.045 Hugepages 00:04:08.045 node hugesize free / total 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 00:04:08.045 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:08.045 12:21:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:08.045 12:21:02 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.045 12:21:02 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.045 12:21:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.045 ************************************ 00:04:08.045 START TEST denied 00:04:08.045 ************************************ 00:04:08.045 12:21:03 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:08.045 12:21:03 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:04:08.045 12:21:03 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:08.045 12:21:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.045 12:21:03 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:08.045 12:21:03 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:04:13.313 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:04:13.314 12:21:07 
setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.314 12:21:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.875 00:04:19.875 real 0m11.022s 00:04:19.875 user 0m2.993s 00:04:19.875 sys 0m7.055s 00:04:19.875 12:21:14 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.875 12:21:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:19.875 ************************************ 00:04:19.875 END TEST denied 00:04:19.875 ************************************ 00:04:19.875 12:21:14 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:19.875 12:21:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:19.875 12:21:14 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.875 12:21:14 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.875 12:21:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.875 ************************************ 00:04:19.875 START TEST allowed 00:04:19.875 ************************************ 00:04:19.875 12:21:14 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:19.875 12:21:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:04:19.875 12:21:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:19.875 12:21:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:04:19.875 12:21:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.875 12:21:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:28.069 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.069 12:21:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:28.069 12:21:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:28.069 12:21:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:28.069 12:21:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.069 12:21:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.338 00:04:33.338 real 0m13.766s 00:04:33.338 user 0m3.336s 00:04:33.338 sys 0m7.145s 00:04:33.338 12:21:27 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.338 12:21:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:33.338 ************************************ 00:04:33.338 END TEST allowed 00:04:33.338 ************************************ 00:04:33.338 12:21:27 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:33.338 00:04:33.338 real 0m34.152s 00:04:33.338 user 0m9.483s 00:04:33.338 sys 0m20.609s 00:04:33.338 12:21:27 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.338 12:21:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.338 ************************************ 00:04:33.338 END TEST acl 00:04:33.338 ************************************ 00:04:33.338 12:21:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:33.338 
12:21:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.338 12:21:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.338 12:21:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.338 12:21:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.338 ************************************ 00:04:33.338 START TEST hugepages 00:04:33.338 ************************************ 00:04:33.338 12:21:27 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.338 * Looking for test storage... 00:04:33.338 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 73580464 kB' 'MemAvailable: 77009488 kB' 'Buffers: 3728 kB' 'Cached: 11304572 kB' 'SwapCached: 0 kB' 'Active: 8300148 kB' 'Inactive: 3517780 kB' 'Active(anon): 7831860 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512956 kB' 'Mapped: 187828 kB' 'Shmem: 7322232 kB' 'KReclaimable: 192724 kB' 'Slab: 550092 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 357368 kB' 'KernelStack: 16384 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 9256364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211832 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.338 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:33.339 12:21:28 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:33.339 12:21:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.339 12:21:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.339 12:21:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.339 ************************************ 00:04:33.339 START TEST default_setup 00:04:33.339 ************************************ 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.339 12:21:28 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:36.623 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:04:36.623 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.623 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.909 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75731772 kB' 'MemAvailable: 79160796 kB' 'Buffers: 3728 kB' 'Cached: 11304728 kB' 'SwapCached: 0 kB' 'Active: 8314200 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845912 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525808 kB' 'Mapped: 187644 kB' 'Shmem: 7322388 kB' 'KReclaimable: 192724 kB' 'Slab: 548616 kB' 'SReclaimable: 192724 kB' 
'SUnreclaim: 355892 kB' 'KernelStack: 16400 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9270028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.819 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75732288 kB' 'MemAvailable: 79161312 kB' 'Buffers: 3728 kB' 'Cached: 11304732 kB' 'SwapCached: 0 kB' 'Active: 8314240 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845952 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526316 kB' 'Mapped: 187620 kB' 'Shmem: 7322392 kB' 'KReclaimable: 192724 kB' 'Slab: 548616 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355892 kB' 'KernelStack: 16384 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9270044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75732512 kB' 'MemAvailable: 79161536 kB' 'Buffers: 3728 kB' 'Cached: 11304752 kB' 'SwapCached: 0 kB' 'Active: 8313768 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845480 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526340 kB' 'Mapped: 187544 kB' 'Shmem: 7322412 kB' 'KReclaimable: 192724 kB' 'Slab: 548592 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355868 kB' 'KernelStack: 16400 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9270064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 
12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _
[... the read loop continues over the remaining /proc/meminfo fields (HardwareCorrupted through HugePages_Free), hitting "continue" for every key that is not HugePages_Rsvd ...]
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.824 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.825 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75733156 kB' 'MemAvailable: 79162180 kB' 'Buffers: 3728 kB' 'Cached: 11304772 kB' 'SwapCached: 0 kB' 'Active: 8313796 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845508 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526340 kB' 'Mapped: 187544 kB' 'Shmem: 7322432 kB' 'KReclaimable: 192724 kB' 'Slab: 548592 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355868 kB' 'KernelStack: 16400 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9270088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
00:04:41.825 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:41.825 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the read loop walks the snapshot above field by field, hitting "continue" for every key that is not HugePages_Total ...]
00:04:41.826 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.826 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:41.826 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:41.826 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
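A note for readers following the trace: get_meminfo, whose expansion dominates the output above, is nothing more than a keyed lookup over /proc/meminfo (or over /sys/devices/system/node/node<N>/meminfo when a node argument is supplied). A minimal stand-alone sketch of the same idea, written here for illustration and not taken from the SPDK tree, looks like this:

    #!/usr/bin/env bash
    # Illustrative stand-in for the get_meminfo helper traced above (not the SPDK code).
    # Usage: get_meminfo FIELD [NODE]
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a f
        while read -ra f; do
            # Per-node meminfo lines are prefixed with "Node <n>"; drop that prefix.
            [[ ${f[0]} == Node ]] && f=("${f[@]:2}")
            # Lines look like "HugePages_Total:    1024" or "MemTotal:  92293532 kB".
            if [[ ${f[0]%:} == "$get" ]]; then
                echo "${f[1]}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on the host in this log
    get_meminfo HugePages_Surp 0  # prints node0's surplus count, 0 here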
00:04:41.826 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:41.826 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41333900 kB' 'MemUsed: 6736012 kB' 'SwapCached: 0 kB' 'Active: 2585956 kB' 'Inactive: 104868 kB' 'Active(anon): 2293032 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2401768 kB' 'Mapped: 95228 kB' 'AnonPages: 292196 kB' 'Shmem: 2003976 kB' 'KernelStack: 8664 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240632 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 176872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:41.827 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the read loop walks the node0 snapshot above field by field, hitting "continue" for every key that is not HugePages_Surp ...]
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:41.828
00:04:41.828 real    0m8.622s
00:04:41.828 user    0m1.880s
00:04:41.828 sys     0m3.662s
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:41.828 12:21:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:41.828 ************************************
00:04:41.828 END TEST default_setup
00:04:41.828 ************************************
00:04:41.828 12:21:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
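default_setup therefore finishes with all 1024 default-sized (2048 kB) pages resident on node0. The same per-node counters can also be cross-checked outside of meminfo through the standard Linux hugetlb sysfs layout; the paths below are kernel-provided, but the loop itself is only an illustrative hand check, not part of this test script:

    #!/usr/bin/env bash
    # Illustrative cross-check of per-node hugepage counters via sysfs.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        hp="$node_dir/hugepages/hugepages-2048kB"
        echo "node${node}: nr_hugepages=$(<"$hp/nr_hugepages") free_hugepages=$(<"$hp/free_hugepages")"
    done
    # On the machine in this log, node0 would report nr_hugepages=1024 and node1 would report 0.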
00:04:41.828 12:21:36 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:41.829 12:21:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:41.829 12:21:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:41.829 12:21:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:41.829 ************************************
00:04:41.829 START TEST per_node_1G_alloc
00:04:41.829 ************************************
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.829 12:21:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:44.365 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:44.365 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:44.365 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
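The sizing above follows directly from the arguments: get_test_nr_hugepages is asked for 1048576 kB (1 GiB) spread over nodes 0 and 1, and with a 2048 kB default hugepage size that works out to 512 pages per node, which is then handed to scripts/setup.sh through NRHUGE and HUGENODE. A small sketch of that arithmetic (the variable names are illustrative; only the final NRHUGE/HUGENODE invocation mirrors the trace):

    #!/usr/bin/env bash
    # Sketch of the sizing arithmetic implied by "get_test_nr_hugepages 1048576 0 1".
    size_kb=1048576                                                     # 1 GiB request, in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
    nr_per_node=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512
    echo "requesting ${nr_per_node} hugepages on each of nodes 0 and 1"

    # The test then re-runs SPDK's setup script with that request pinned to both nodes
    # (requires root and the workspace checkout used by this job):
    NRHUGE=$nr_per_node HUGENODE=0,1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh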
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.901 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75788080 kB' 'MemAvailable: 79217104 kB' 'Buffers: 3728 kB' 'Cached: 11304896 kB' 'SwapCached: 0 kB' 'Active: 8312212 kB' 'Inactive: 3517780 kB' 'Active(anon): 7843924 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524384 kB' 'Mapped: 186996 kB' 'Shmem: 7322556 kB' 'KReclaimable: 192724 kB' 'Slab: 548952 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 356228 kB' 'KernelStack: 16224 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9260716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211960 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB'
[... the read loop then walks the snapshot above field by field (MemTotal, MemFree, ..., WritebackTmp, ...), hitting "continue" for every key that is not AnonHugePages ...]
-- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75787304 kB' 'MemAvailable: 79216328 kB' 'Buffers: 3728 kB' 'Cached: 11304900 kB' 'SwapCached: 0 kB' 'Active: 8312272 kB' 'Inactive: 3517780 kB' 'Active(anon): 7843984 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524868 kB' 'Mapped: 186856 kB' 'Shmem: 7322560 kB' 'KReclaimable: 192724 kB' 'Slab: 548956 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 356232 kB' 'KernelStack: 16208 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9261724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211928 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
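The trace surrounding this point is the meminfo reader from setup/common.sh at work: with IFS set to ': ' it reads each line of /proc/meminfo, skips every key that does not match the requested one, and echoes the matching value. The AnonHugePages pass just above came back 0 (anon=0), and the same loop is now scanning for HugePages_Surp. A minimal sketch of that pattern follows; get_meminfo_sketch and its streaming read are reconstructions for illustration, not the project's actual helper, which (as the mapfile and printf lines in the trace show) first snapshots the file and can also read /sys/devices/system/node/node<id>/meminfo after stripping the "Node <id>" prefix.

get_meminfo_sketch() {
    # Hypothetical stand-in for the get_meminfo helper exercised in this trace.
    # It only reads the system-wide /proc/meminfo.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip keys until the requested one, then print its value (a count or a kB figure).
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

On the snapshot printed in this trace, get_meminfo_sketch HugePages_Surp would print 0 and get_meminfo_sketch HugePages_Total would print 1024.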
00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.903 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.904 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75787172 kB' 'MemAvailable: 79216196 kB' 'Buffers: 3728 kB' 'Cached: 11304920 kB' 'SwapCached: 0 kB' 'Active: 8312112 kB' 'Inactive: 3517780 kB' 'Active(anon): 7843824 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524684 kB' 'Mapped: 186856 kB' 'Shmem: 7322580 kB' 'KReclaimable: 192724 kB' 'Slab: 548956 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 356232 kB' 'KernelStack: 16176 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9263308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 
12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
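The scan running here is the same loop again, now looking for HugePages_Rsvd; once it and the HugePages_Total pass that follows finish, setup/hugepages.sh (the @107 and @109 lines later in the trace) checks that the configured pool is consistent: with surp=0 and resv=0, the 1024 pages of Hugepagesize 2048 kB reported by the kernel must equal the nr_hugepages count the script echoes. A hedged reading of that arithmetic with the values from this run is sketched below; read_kb is a hypothetical helper, not part of the SPDK scripts, and the operand order in the real check may differ.

read_kb() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

anon=$(read_kb AnonHugePages)      # 0 in this run (captured at hugepages.sh@97, not part of the sum below)
surp=$(read_kb HugePages_Surp)     # 0
resv=$(read_kb HugePages_Rsvd)     # 0
total=$(read_kb HugePages_Total)   # 1024 pages of 2048 kB each, i.e. Hugetlb 2097152 kB
nr_hugepages=1024                  # echoed as nr_hugepages=1024 by hugepages.sh@102 in this trace

if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage pool mismatch: total=$total surp=$surp resv=$resv"
fi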
00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.905 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.906 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.907 nr_hugepages=1024 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.907 
resv_hugepages=0 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.907 surplus_hugepages=0 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.907 anon_hugepages=0 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75785980 kB' 'MemAvailable: 79215004 kB' 'Buffers: 3728 kB' 'Cached: 11304940 kB' 'SwapCached: 0 kB' 'Active: 8312060 kB' 'Inactive: 3517780 kB' 'Active(anon): 7843772 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524560 kB' 'Mapped: 186856 kB' 'Shmem: 7322600 kB' 'KReclaimable: 192724 kB' 'Slab: 548956 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 356232 kB' 'KernelStack: 16320 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9262960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211944 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.907 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.908 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.908 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42416364 kB' 'MemUsed: 5653548 kB' 'SwapCached: 0 kB' 'Active: 2584192 kB' 'Inactive: 104868 kB' 'Active(anon): 2291268 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2401856 kB' 'Mapped: 94448 kB' 'AnonPages: 290452 kB' 'Shmem: 2004064 kB' 'KernelStack: 8600 kB' 'PageTables: 3528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240964 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 177204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 
12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
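The long run of "[[ <key> == HugePages_... ]] ... continue" entries above and below is get_meminfo from setup/common.sh walking every key of the chosen meminfo file until it reaches the requested one, then echoing that key's value. A minimal sketch of that loop, assuming only the variable names visible in the xtrace (illustrative only, not the actual setup/common.sh source):

    # pick /proc/meminfo, or a per-node file when a node number is given
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node files prefix every line with "Node N "; strip it, then
        # split each "key: value [kB]" line and return the matching value
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }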
00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.909 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33366072 kB' 'MemUsed: 10857548 kB' 'SwapCached: 0 kB' 'Active: 5728356 kB' 'Inactive: 3412912 kB' 'Active(anon): 5552992 kB' 'Inactive(anon): 0 kB' 'Active(file): 175364 kB' 'Inactive(file): 3412912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8906836 kB' 'Mapped: 92408 kB' 'AnonPages: 234568 kB' 'Shmem: 5318560 kB' 'KernelStack: 7736 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128964 kB' 'Slab: 307992 kB' 'SReclaimable: 128964 kB' 'SUnreclaim: 179028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
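Here the lookup has switched from node0 to /sys/devices/system/node/node1/meminfo for the same HugePages_Surp query. The surrounding hugepages.sh trace (@115-@117, and @126-@128 further down) folds each node's reserved and surplus counts into the per-node expectation before printing "nodeX=512 expecting 512". A condensed, self-contained sketch of that bookkeeping, with the values seen in this run (resv and surp are both 0 here; variable names follow the xtrace, but the code is illustrative):

    declare -a nodes_test=(512 512)   # even split of the 1024 requested pages
    resv=0                            # reserved pages reported system-wide in this log
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=0                        # get_meminfo HugePages_Surp $node returned 0 above
        (( nodes_test[node] += surp ))
        echo "node${node}=${nodes_test[node]} expecting 512"
    done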
00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.910 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.911 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.912 node0=512 expecting 512 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:46.912 node1=512 expecting 512 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.912 00:04:46.912 real 0m4.787s 00:04:46.912 user 0m1.456s 00:04:46.912 sys 0m3.291s 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.912 12:21:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.912 ************************************ 00:04:46.912 END TEST per_node_1G_alloc 00:04:46.912 ************************************ 00:04:46.912 12:21:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:46.912 12:21:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:46.912 12:21:41 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.912 12:21:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.912 12:21:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.912 ************************************ 00:04:46.912 START TEST even_2G_alloc 00:04:46.912 ************************************ 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.912 12:21:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:49.448 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 
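The trace above (setup/hugepages.sh@152 and @58-84) requests a 2 GiB allocation of 2048 kB hugepages, i.e. 1024 pages, and with HUGE_EVEN_ALLOC=yes spreads them evenly across the host's two NUMA nodes, 512 per node. Below is a minimal bash sketch of that even split; it is illustrative only, not a verbatim excerpt of setup/hugepages.sh (variable names are simplified from the script's _nr_hugepages/_no_nodes/nodes_test, and the 2048 kB page size is taken from the Hugepagesize field in the meminfo dumps that follow).

# sketch: even per-node hugepage split, as exercised by even_2G_alloc
size_kb=2097152                                    # from get_test_nr_hugepages 2097152
hugepage_kb=2048                                   # Hugepagesize reported in the dumps below
no_nodes=2                                         # NUMA nodes on this test host
nr_hugepages=$(( size_kb / hugepage_kb ))          # 1024 pages total
nodes_test=()
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 pages each
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]} expecting 512 each"

Running the sketch prints the same "node0=512 ... node1=512 expecting 512" expectation that per_node_1G_alloc just verified above and that even_2G_alloc will check again after setup.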
00:04:49.448 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:49.448 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:49.449 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:49.449 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:49.449 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:49.449 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:49.449 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:49.449 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:51.352 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:51.352 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.352 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.352 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.352 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75825488 kB' 'MemAvailable: 79254512 kB' 'Buffers: 3728 kB' 'Cached: 11305084 kB' 'SwapCached: 0 kB' 'Active: 8313228 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844940 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525000 kB' 'Mapped: 187044 kB' 'Shmem: 7322744 kB' 'KReclaimable: 192724 kB' 'Slab: 548072 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355348 kB' 'KernelStack: 16224 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9261708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 
12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.616 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.617 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75824732 kB' 'MemAvailable: 79253756 kB' 'Buffers: 3728 kB' 'Cached: 11305088 kB' 'SwapCached: 0 kB' 'Active: 8313008 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844720 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525304 kB' 'Mapped: 186928 kB' 'Shmem: 7322748 kB' 'KReclaimable: 192724 kB' 'Slab: 548060 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355336 kB' 'KernelStack: 16224 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9261724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.618 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.619 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75825220 kB' 'MemAvailable: 79254244 kB' 'Buffers: 3728 kB' 'Cached: 11305108 kB' 'SwapCached: 0 kB' 'Active: 8313044 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844756 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525300 kB' 'Mapped: 186928 kB' 'Shmem: 7322768 kB' 'KReclaimable: 192724 kB' 'Slab: 548060 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355336 kB' 'KernelStack: 16224 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9261748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.620 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.621 nr_hugepages=1024 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.621 resv_hugepages=0 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.621 surplus_hugepages=0 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.621 anon_hugepages=0 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.621 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
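The xtrace above is setup/common.sh's get_meminfo helper at work: it picks /proc/meminfo (or, when a node argument is given, /sys/devices/system/node/nodeN/meminfo), loads the file with mapfile, strips the "Node N " prefix that per-node files carry, and then re-reads the "key: value" pairs one by one until the requested field matches, echoing its value; the long printf dump immediately below is that stripped array being fed back into the read loop. A minimal stand-alone re-creation of the same pattern, without SPDK's xtrace plumbing and with an illustrative wrapper name (get_meminfo_value is not the real helper, and the usage lines only mirror the accounting the even_2G_alloc test performs), might look roughly like this:

#!/usr/bin/env bash
# Sketch of the scan traced above: return one key's value from /proc/meminfo
# or from a NUMA node's meminfo file.
shopt -s extglob

get_meminfo_value() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem

    # Per-node counters live in /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "

    # Walk the "key: value [kB]" pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Mirrors the check in the trace: 1024 pages requested, no surplus or reserved
# pages, so HugePages_Total must equal nr_hugepages + surp + resv.
nr_hugepages=1024 surp=0 resv=0
(( $(get_meminfo_value HugePages_Total) == nr_hugepages + surp + resv )) &&
    echo "system-wide hugepage count matches"

This is only a sketch of the parsing pattern; the actual test goes on to loop over every /sys/devices/system/node/node* entry, expect an even 512/512 split of the 1024 pages, and verify HugePages_Surp per node, as the remainder of the trace shows.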
00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75824720 kB' 'MemAvailable: 79253744 kB' 'Buffers: 3728 kB' 'Cached: 11305128 kB' 'SwapCached: 0 kB' 'Active: 8313060 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844772 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525300 kB' 'Mapped: 186928 kB' 'Shmem: 7322788 kB' 'KReclaimable: 192724 kB' 'Slab: 548060 kB' 'SReclaimable: 192724 kB' 'SUnreclaim: 355336 kB' 'KernelStack: 16224 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9261768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 
12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:51.622 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 
12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42437824 kB' 'MemUsed: 5632088 kB' 'SwapCached: 0 kB' 'Active: 2584304 kB' 'Inactive: 104868 kB' 'Active(anon): 2291380 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2401980 kB' 'Mapped: 94512 kB' 'AnonPages: 290332 kB' 'Shmem: 2004188 kB' 'KernelStack: 8616 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240040 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 176280 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.623 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 
12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:51.624 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33386896 kB' 'MemUsed: 10836724 kB' 'SwapCached: 0 kB' 'Active: 5728808 kB' 'Inactive: 3412912 kB' 'Active(anon): 5553444 kB' 'Inactive(anon): 0 kB' 'Active(file): 175364 kB' 'Inactive(file): 3412912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8906900 kB' 'Mapped: 92416 kB' 'AnonPages: 234984 kB' 'Shmem: 5318624 kB' 'KernelStack: 7608 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128964 kB' 'Slab: 308020 kB' 'SReclaimable: 128964 kB' 'SUnreclaim: 179056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.625 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.626 node0=512 expecting 512 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:51.626 node1=512 expecting 512 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:51.626 00:04:51.626 real 0m4.967s 00:04:51.626 user 0m1.534s 00:04:51.626 sys 0m3.345s 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.626 12:21:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.626 ************************************ 00:04:51.626 END TEST even_2G_alloc 00:04:51.626 ************************************ 00:04:51.626 12:21:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:51.626 12:21:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:51.626 12:21:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.626 12:21:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.626 12:21:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.886 ************************************ 00:04:51.886 START TEST odd_alloc 
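The even_2G_alloc test above passes once both NUMA nodes report the expected 512 x 2 MB hugepages ("node0=512 expecting 512", "node1=512 expecting 512"). A minimal sketch of that per-node check, reading the standard sysfs per-node meminfo files directly rather than going through SPDK's setup/common.sh helpers:

    # verify an even 2G allocation: 512 x 2MB hugepages on each node
    expected=512
    for node_dir in /sys/devices/system/node/node[01]; do
        # per-node meminfo lines read "Node 0 HugePages_Total:   512"
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "${node_dir##*/}=$total expecting $expected"
        [[ $total -eq $expected ]] || exit 1
    done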
00:04:51.886 ************************************ 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.886 12:21:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:54.418 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.418 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 
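odd_alloc requests 2098176 kB (HUGEMEM=2049), which the trace turns into 1025 two-megabyte pages and then spreads across the two nodes as 513 and 512. A sketch of that split arithmetic, using an assumed helper name rather than the get_test_nr_hugepages_per_node implementation:

    # divide an odd hugepage count as evenly as possible across NUMA nodes
    split_hugepages() {
        local total=$1 nodes=$2 i base rem
        base=$((total / nodes))   # 1025 / 2 -> 512 pages per node
        rem=$((total % nodes))    # 1025 % 2 -> 1 leftover page
        for ((i = 0; i < nodes; i++)); do
            # the first $rem node(s) absorb the leftover page(s)
            echo "node$i=$((base + (i < rem ? 1 : 0)))"
        done
    }
    split_hugepages 1025 2   # -> node0=513, node1=512, matching the trace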
00:04:54.418 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:54.418 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75818412 kB' 'MemAvailable: 79247444 kB' 'Buffers: 3728 kB' 'Cached: 11305264 kB' 'SwapCached: 0 kB' 'Active: 8313928 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845640 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525272 kB' 'Mapped: 187084 kB' 'Shmem: 7322924 kB' 'KReclaimable: 192740 kB' 'Slab: 548268 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355528 kB' 'KernelStack: 16304 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9261912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 
'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.001 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:57.002 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
92293532 kB' 'MemFree: 75819016 kB' 'MemAvailable: 79248048 kB' 'Buffers: 3728 kB' 'Cached: 11305268 kB' 'SwapCached: 0 kB' 'Active: 8313184 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844896 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525452 kB' 'Mapped: 186996 kB' 'Shmem: 7322928 kB' 'KReclaimable: 192740 kB' 'Slab: 548256 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355516 kB' 'KernelStack: 16320 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9262060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 
12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.003 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75818264 kB' 'MemAvailable: 79247296 kB' 'Buffers: 3728 kB' 'Cached: 11305292 kB' 'SwapCached: 0 kB' 'Active: 8312932 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844644 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525236 kB' 'Mapped: 186996 kB' 'Shmem: 7322952 kB' 'KReclaimable: 192740 kB' 'Slab: 548256 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355516 kB' 'KernelStack: 16320 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9262448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.004 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 
12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.005 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:57.006 nr_hugepages=1025 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.006 resv_hugepages=0 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.006 surplus_hugepages=0 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.006 anon_hugepages=0 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75818468 kB' 'MemAvailable: 79247500 kB' 'Buffers: 3728 kB' 'Cached: 11305316 kB' 'SwapCached: 0 kB' 'Active: 8313396 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845108 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525680 kB' 'Mapped: 186996 kB' 'Shmem: 7322976 kB' 'KReclaimable: 192740 kB' 'Slab: 548248 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355508 kB' 'KernelStack: 16336 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 9262468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211880 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.006 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.007 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42457028 kB' 'MemUsed: 5612884 kB' 'SwapCached: 0 kB' 'Active: 2585156 kB' 'Inactive: 104868 kB' 'Active(anon): 2292232 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2402104 kB' 'Mapped: 94572 kB' 'AnonPages: 291292 kB' 'Shmem: 2004312 kB' 'KernelStack: 8632 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240076 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 176316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.008 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
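The loop traced above is the per-node meminfo lookup driven by setup/common.sh's get_meminfo helper: it reads either /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strips the "Node N " prefix that per-node files carry, and walks each "key: value" pair until it reaches the requested key (here HugePages_Surp), skipping every other key with continue. A minimal bash sketch of that pattern, reconstructed only from the commands visible in the trace and not taken from the actual setup/common.sh source:

    shopt -s extglob                                   # the +([0-9]) pattern below needs extglob

    get_meminfo_sketch() {                             # usage: get_meminfo_sketch KEY [NODE]
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")               # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"     # split "HugePages_Surp:   0" into key and value
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0                                         # key not present: report 0, as the trace does
    }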
00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 33361440 kB' 'MemUsed: 10862180 kB' 'SwapCached: 0 kB' 'Active: 5728240 kB' 'Inactive: 3412912 kB' 'Active(anon): 5552876 kB' 'Inactive(anon): 0 kB' 'Active(file): 175364 kB' 'Inactive(file): 3412912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8906940 kB' 'Mapped: 92424 kB' 'AnonPages: 234424 kB' 'Shmem: 5318664 kB' 'KernelStack: 7720 kB' 'PageTables: 4860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128980 kB' 'Slab: 308172 kB' 'SReclaimable: 128980 kB' 'SUnreclaim: 179192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
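The hugepages.sh@115-117 lines in this stretch fold each node's reserved and surplus hugepages back into nodes_test, and the @126-130 lines further down compare the resulting per-node counts with the observed ones by using the counts as array indices, so only the set of values has to match (hence the later check [[ 512 513 == 512 513 ]] succeeding even though node0/node1 hold 512 and 513 in swapped order). A compressed, illustrative sketch of that bookkeeping, with values taken from the "expecting" echoes below and the per-node reads replaced by literals:

    resv=0 surp=0                # HugePages_Rsvd / HugePages_Surp, read via get_meminfo in the real script
    nodes_test=(513 512)         # expected per-node counts (illustrative)
    nodes_sys=(512 513)          # counts actually observed on node0/node1 (illustrative)

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += surp ))
    done

    sorted_t=() sorted_s=()      # plain indexed arrays: the counts become the indices
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # Listing indexed-array indices yields them in ascending order, so the check is
    # insensitive to which node ended up with 512 and which with 513:
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc split looks right"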
00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.009 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.010 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:57.011 node0=512 expecting 513 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:57.011 node1=513 expecting 512 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:57.011 00:04:57.011 real 0m4.925s 00:04:57.011 user 0m1.512s 00:04:57.011 sys 0m3.397s 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.011 12:21:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:57.011 ************************************ 00:04:57.011 END TEST odd_alloc 00:04:57.011 ************************************ 00:04:57.011 12:21:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:57.011 12:21:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:57.011 12:21:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.011 12:21:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.011 12:21:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.011 ************************************ 00:04:57.011 START TEST custom_alloc 00:04:57.011 ************************************ 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:57.011 12:21:51 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.011 12:21:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:00.301 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:00.301 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
00:05:00.301 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.301 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:02.212 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74785000 kB' 'MemAvailable: 78214032 kB' 'Buffers: 3728 kB' 'Cached: 11305452 kB' 'SwapCached: 0 kB' 'Active: 8314672 kB' 'Inactive: 3517780 kB' 'Active(anon): 7846384 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526060 kB' 'Mapped: 187100 kB' 'Shmem: 7323112 kB' 'KReclaimable: 192740 kB' 'Slab: 547832 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355092 kB' 'KernelStack: 16256 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9273084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211992 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
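The custom_alloc preamble earlier in the trace (hugepages.sh@174-187) derives the per-node page counts from the requested sizes, presumably by dividing by the default hugepage size, and the dump above is consistent with that: 1048576 kB / 2048 kB = 512 pages for nodes_hp[0], 2097152 kB / 2048 kB = 1024 pages for nodes_hp[1], 1536 pages total, matching 'HugePages_Total: 1536' and 'Hugetlb: 3145728 kB' (1536 x 2048 kB). The HUGENODE string handed to scripts/setup.sh is just the comma-joined nodes_hp assignments. A sketch of that arithmetic and string building, assuming both the requested sizes and Hugepagesize are in kB as the numbers above imply:

    hugepagesize_kb=2048                                # Hugepagesize from the dump above
    sizes_kb=(1048576 2097152)                          # requested per node: 1 GiB and 2 GiB

    nodes_hp=()
    for i in "${!sizes_kb[@]}"; do
        nodes_hp[i]=$(( sizes_kb[i] / hugepagesize_kb ))    # 512 and 1024 pages
    done

    total=0
    hugenode=()
    for node in "${!nodes_hp[@]}"; do
        hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( total += nodes_hp[node] ))
    done

    echo "nr_hugepages=$total"                          # 1536, as in the trace
    (IFS=,; echo "HUGENODE='${hugenode[*]}'")           # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    echo "Hugetlb should be $(( total * hugepagesize_kb )) kB"    # 3145728 kB, matching the dump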
00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.213 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74784404 kB' 'MemAvailable: 78213436 kB' 'Buffers: 3728 kB' 'Cached: 11305456 kB' 'SwapCached: 0 kB' 'Active: 8314564 kB' 'Inactive: 3517780 kB' 'Active(anon): 7846276 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525940 kB' 'Mapped: 187068 kB' 'Shmem: 7323116 kB' 'KReclaimable: 192740 kB' 'Slab: 547824 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355084 kB' 'KernelStack: 16208 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9262920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211928 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.214 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.215 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74785172 kB' 'MemAvailable: 78214204 kB' 'Buffers: 3728 kB' 'Cached: 11305472 kB' 'SwapCached: 0 kB' 'Active: 8314024 kB' 'Inactive: 3517780 kB' 'Active(anon): 7845736 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525808 kB' 
'Mapped: 186992 kB' 'Shmem: 7323132 kB' 'KReclaimable: 192740 kB' 'Slab: 547824 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355084 kB' 'KernelStack: 16176 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9263072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.216 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.217 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
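The xtrace output above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: each line is split with IFS=': ' into a key and a value, every key that is not the requested one (first HugePages_Surp, now HugePages_Rsvd) hits the continue at common.sh@32, and the matching key falls through to the echo / return 0 at common.sh@33, which is how hugepages.sh@99 ends up with surp=0. A minimal sketch of that helper, reconstructed from the trace rather than copied from setup/common.sh (so the exact option handling and extglob setup are assumptions), looks like this:

get_meminfo() {
    local get=$1 node=${2:-}            # field to look up, optional NUMA node index
    local var val _ mem mem_f=/proc/meminfo
    shopt -s extglob                    # the +([0-9]) pattern below needs extglob
    # With a node index, read that node's meminfo instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip every field except the requested one
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

hugepages.sh then consumes it as surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), and later in this trace calls it per node, e.g. get_meminfo HugePages_Surp 0.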
00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:02.218 nr_hugepages=1536 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.218 resv_hugepages=0 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.218 surplus_hugepages=0 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.218 anon_hugepages=0 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 74784888 kB' 'MemAvailable: 78213920 kB' 'Buffers: 3728 kB' 'Cached: 11305480 kB' 'SwapCached: 0 kB' 'Active: 8313268 kB' 'Inactive: 3517780 kB' 'Active(anon): 7844980 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525048 kB' 'Mapped: 186992 kB' 'Shmem: 7323140 kB' 'KReclaimable: 192740 kB' 'Slab: 547824 kB' 'SReclaimable: 192740 kB' 'SUnreclaim: 355084 kB' 'KernelStack: 16192 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 9263100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.218 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.219 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
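At this point the global HugePages_Total (1536) has been confirmed to equal nr_hugepages + surp + resv, and get_nodes records how that pool is split across the two NUMA nodes: 512 pages on node0 and 1024 on node1, which indeed sum to 1536. A small sketch of that per-node collection, assuming the counts come from the kernel's per-node nr_hugepages files (the real helper may equally take them from the per-node meminfo dumps shown below):

    shopt -s extglob nullglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # 2 MiB hugepages currently allocated on this node: 512 on node0, 1024 on node1 here
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}      # 2 on this runner
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2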
00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 42468352 kB' 'MemUsed: 5601560 kB' 'SwapCached: 0 kB' 'Active: 2585708 kB' 'Inactive: 104868 kB' 'Active(anon): 2292784 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2402276 kB' 'Mapped: 94560 kB' 'AnonPages: 291448 kB' 'Shmem: 2004484 kB' 'KernelStack: 8632 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240184 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 176424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.220 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.221 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 32316536 kB' 'MemUsed: 11907084 kB' 'SwapCached: 0 kB' 'Active: 5728752 kB' 'Inactive: 3412912 kB' 'Active(anon): 5553388 kB' 'Inactive(anon): 0 kB' 'Active(file): 175364 kB' 'Inactive(file): 3412912 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8906976 kB' 'Mapped: 92432 kB' 'AnonPages: 234772 kB' 'Shmem: 5318700 kB' 'KernelStack: 7624 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128980 kB' 'Slab: 307640 kB' 'SReclaimable: 128980 kB' 'SUnreclaim: 178660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.221 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:02.222 node0=512 expecting 512 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:02.222 node1=1024 expecting 1024 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:02.222 00:05:02.222 real 0m5.377s 00:05:02.222 user 0m1.876s 00:05:02.222 sys 0m3.492s 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.222 12:21:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.222 ************************************ 00:05:02.222 END TEST custom_alloc 00:05:02.222 ************************************ 00:05:02.222 12:21:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.222 12:21:57 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:02.222 12:21:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.222 12:21:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.222 12:21:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.222 ************************************ 00:05:02.222 START TEST no_shrink_alloc 00:05:02.222 ************************************ 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.222 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.223 12:21:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:05.511 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:05.511 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:05.511 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.426 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.427 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75838836 kB' 'MemAvailable: 79267884 kB' 'Buffers: 3728 kB' 'Cached: 11305640 kB' 'SwapCached: 0 kB' 'Active: 8316812 kB' 'Inactive: 3517780 kB' 'Active(anon): 7848524 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528044 kB' 'Mapped: 187276 kB' 'Shmem: 7323300 kB' 'KReclaimable: 192772 kB' 'Slab: 547760 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 354988 kB' 'KernelStack: 16256 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9264100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
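For reference on the no_shrink_alloc prologue traced above: get_test_nr_hugepages is called with 2097152 and node 0 and ends up with nr_hugepages=1024 assigned to nodes_test[0]. That arithmetic only works out if the size argument and the default hugepage size share a unit; with the "Hugepagesize: 2048 kB" reported in the meminfo dumps, 2097152 / 2048 = 1024, so the sketch below assumes kB throughout (the _sketch name and variable layout are illustrative):

    get_test_nr_hugepages_sketch() {
        local size=$1; shift                       # requested pool size in kB (2097152 kB = 2 GiB assumed)
        local -a node_ids=("$@")                   # optional NUMA node ids, e.g. ("0")
        local default_hugepages=2048               # "Hugepagesize: 2048 kB" from the dumps above
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        nodes_test=()                              # global, mirrors the traced nodes_test array
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages         # nodes_test[0]=1024
        done
    }
    # get_test_nr_hugepages_sketch 2097152 0   ->  nr_hugepages=1024, nodes_test[0]=1024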
00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
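The verify_nr_hugepages pass running here first checks that transparent hugepages are not globally disabled ("always [madvise] never" does not contain "[never]") and only then reads AnonHugePages from /proc/meminfo, which resolves to 0 a little further down (anon=0). A compact sketch of that guard, assuming the value comes from the usual THP control file; the awk one-liner stands in for the get_meminfo call:

    thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this box
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        # kB of anonymous memory currently backed by transparent hugepages
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon:-0}"                                        # anon=0 in this run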
00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.427 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75839416 kB' 'MemAvailable: 79268464 kB' 'Buffers: 3728 kB' 'Cached: 11305644 kB' 'SwapCached: 0 kB' 'Active: 8315712 kB' 'Inactive: 3517780 kB' 'Active(anon): 7847424 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527408 kB' 'Mapped: 187140 kB' 'Shmem: 7323304 kB' 'KReclaimable: 192772 kB' 'Slab: 547752 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 354980 kB' 'KernelStack: 16240 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9264120 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.428 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 
12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
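[Editorial note] Each get_meminfo call in this trace starts the same way (setup/common.sh@20-@25): mem_f defaults to the system-wide /proc/meminfo, and a per-NUMA-node file is only considered when a node argument was given; here node is empty, so /sys/devices/system/node/node/meminfo does not exist and the global file is read. The sketch below shows that source selection under the same assumption of an empty node; variable names are illustrative and the condition ordering is approximate.

# Sketch of the meminfo source selection seen at the top of each call (illustrative).
node=""                                   # empty in the calls traced here
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' "$mem_f"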
00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.429 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75840328 kB' 'MemAvailable: 79269376 kB' 'Buffers: 3728 kB' 'Cached: 11305660 kB' 'SwapCached: 0 kB' 'Active: 8316012 kB' 'Inactive: 3517780 kB' 'Active(anon): 7847724 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527660 kB' 'Mapped: 187140 kB' 'Shmem: 7323320 kB' 'KReclaimable: 192772 kB' 'Slab: 547752 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 354980 kB' 'KernelStack: 16240 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9264140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 
12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.430 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.431 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
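[Editorial note] The scan below finishes the HugePages_Rsvd lookup, after which setup/hugepages.sh (@97-@110 in the trace) records anon=0, surp=0, resv=0, echoes them, and only proceeds when the configured pool size (1024 pages) accounts for every huge page before a fourth lookup fetches HugePages_Total. A compact sketch of that accounting check, using awk lookups and names chosen for illustration rather than SPDK's exact code:

# Sketch of the hugepage accounting check traced just below (illustrative only).
nr_hugepages=1024
anon=$(awk '/^AnonHugePages:/    {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/   {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/   {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
     "surplus_hugepages=$surp" "anon_hugepages=$anon"
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
    || echo "hugepage pool is inconsistent" >&2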
00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:07.432 nr_hugepages=1024 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.432 resv_hugepages=0 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.432 surplus_hugepages=0 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.432 anon_hugepages=0 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75840328 kB' 'MemAvailable: 79269376 kB' 'Buffers: 3728 kB' 'Cached: 11305684 kB' 'SwapCached: 0 kB' 'Active: 8316028 kB' 'Inactive: 3517780 kB' 'Active(anon): 7847740 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527660 kB' 'Mapped: 187140 kB' 'Shmem: 7323344 kB' 'KReclaimable: 192772 kB' 'Slab: 547752 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 354980 kB' 'KernelStack: 16240 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9264164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211896 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.432 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.433 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
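The xtrace around this point is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) field by field with IFS=': ' and echoing the value of the requested key, here HugePages_Total. A minimal standalone sketch of that parsing pattern, assuming a simplified re-implementation rather than the actual SPDK helper, could look like:

  #!/usr/bin/env bash
  # Sketch only: simplified stand-in for the get_meminfo pattern traced here.
  # Prints the value of one "Key: value" pair from /proc/meminfo, or from a
  # per-node meminfo file when a node number is supplied.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          # Per-node meminfo lines carry a "Node <N> " prefix; drop it so the
          # same key comparison works for both files.
          [[ $line == "Node "* ]] && line=${line#Node * }
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  # Example: get_meminfo_sketch HugePages_Total prints 1024 on this runner.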
00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41425332 kB' 'MemUsed: 6644580 kB' 'SwapCached: 0 kB' 'Active: 2585064 kB' 'Inactive: 104868 kB' 'Active(anon): 2292140 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2402348 kB' 'Mapped: 94696 kB' 'AnonPages: 290756 kB' 'Shmem: 2004556 kB' 'KernelStack: 8616 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240116 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 176356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.434 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.435 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
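The node accounting traced here feeds the per-node check printed just below (node0=1024 expecting 1024): all 1024 hugepages sit on node0 and none on node1. A hedged sketch of the same check read directly from the kernel's per-node hugepage counters (an assumption for illustration; the SPDK scripts derive the numbers from per-node meminfo instead, and the 2048 kB page size is taken from the snapshot above):

  # Sketch only: list per-node 2 MiB hugepage counts and compare node0 with
  # the expected total.
  expected=1024
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node${node}=${nr}"
  done
  nr0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
  [[ $nr0 -eq $expected ]] && echo "node0=${nr0} expecting ${expected}"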
00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.436 node0=1024 expecting 1024 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.436 12:22:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:10.727 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:10.727 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:10.727 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:12.103 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- 
# verify_nr_hugepages 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75840116 kB' 'MemAvailable: 79269164 kB' 'Buffers: 3728 kB' 'Cached: 11305812 kB' 'SwapCached: 0 kB' 'Active: 8317860 kB' 'Inactive: 3517780 kB' 'Active(anon): 7849572 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529672 kB' 'Mapped: 187344 kB' 'Shmem: 7323472 kB' 'KReclaimable: 192772 kB' 'Slab: 547956 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 355184 kB' 'KernelStack: 16512 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9267552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212120 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.368 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.368 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
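The verify_nr_hugepages pass running here first gates on the transparent-hugepage mode (the "always [madvise] never != *[never]*" test above) and only then reads AnonHugePages, which is 0 kB on this runner. A standalone sketch of that gate, assuming the standard sysfs path rather than quoting the SPDK script:

  # Sketch only: skip the anon-hugepage count when THP is fully disabled.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp == *"[never]"* ]]; then
      anon_kb=0
  else
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon_hugepages=${anon_kb}"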
00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.369 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75838476 kB' 'MemAvailable: 79267524 kB' 'Buffers: 3728 kB' 'Cached: 11305812 kB' 'SwapCached: 0 kB' 'Active: 8318864 kB' 'Inactive: 3517780 kB' 'Active(anon): 7850576 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530228 kB' 'Mapped: 187296 kB' 'Shmem: 7323472 kB' 'KReclaimable: 192772 kB' 'Slab: 548184 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 355412 kB' 'KernelStack: 16736 kB' 'PageTables: 9496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9267568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212072 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
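The long run of 'continue' traces here is setup/common.sh's get_meminfo walking its snapshot of /proc/meminfo key by key until it reaches the field it was asked for (HugePages_Surp in this pass, after AnonHugePages came back 0 and anon was set to 0). A minimal sketch of that loop, reconstructed from the xtrace output rather than copied from the real setup/common.sh, so names and details are approximate:

    get_meminfo() {                        # sketch reconstructed from the trace, not the verbatim script
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node meminfo when a NUMA node was requested and the file exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix; strip it
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # every non-matching key is one 'continue' trace above
            echo "$val"                        # 0 for HugePages_Surp in this run
            return 0
        done
        return 1
    }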
00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.370 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 
12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.371 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75839404 kB' 'MemAvailable: 79268452 kB' 'Buffers: 3728 kB' 'Cached: 11305832 kB' 'SwapCached: 0 kB' 'Active: 8318344 kB' 'Inactive: 3517780 kB' 'Active(anon): 7850056 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529704 kB' 'Mapped: 187196 kB' 'Shmem: 7323492 kB' 'KReclaimable: 192772 kB' 'Slab: 548188 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 355416 kB' 'KernelStack: 16624 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9264984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
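A side note on the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d in these traces: the script compares each key against a quoted variable, and bash's xtrace prints a quoted [[ == ]] pattern with every character backslash-escaped to show it will be matched literally rather than as a glob. A tiny standalone illustration (not part of the test run):

    set -x
    get=HugePages_Rsvd
    var=MemTotal
    [[ $var == "$get" ]]   # xtrace renders this roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]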
00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.372 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
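The mem=("${mem[@]#Node +([0-9]) }") expansion seen at the start of each of these lookups exists because per-node meminfo files prefix every line with "Node N", while /proc/meminfo does not; stripping the prefix lets one parser handle both sources. A standalone illustration of that expansion (extglob is needed for the +([0-9]) pattern):

    shopt -s extglob
    node_line='Node 0 HugePages_Rsvd:     0'
    echo "${node_line#Node +([0-9]) }"   # prints: HugePages_Rsvd:     0
    proc_line='HugePages_Rsvd:     0'
    echo "${proc_line#Node +([0-9]) }"   # nothing to strip, printed unchanged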
00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.373 nr_hugepages=1024 00:05:12.373 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.373 resv_hugepages=0 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.374 surplus_hugepages=0 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.374 anon_hugepages=0 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293532 kB' 'MemFree: 75840772 kB' 'MemAvailable: 79269820 kB' 'Buffers: 3728 kB' 'Cached: 11305872 kB' 'SwapCached: 0 kB' 'Active: 8316940 kB' 'Inactive: 3517780 kB' 'Active(anon): 7848652 kB' 'Inactive(anon): 0 kB' 'Active(file): 468288 kB' 'Inactive(file): 3517780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528380 kB' 'Mapped: 187196 kB' 'Shmem: 7323532 kB' 'KReclaimable: 192772 kB' 'Slab: 548096 kB' 'SReclaimable: 192772 kB' 'SUnreclaim: 355324 kB' 'KernelStack: 16368 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 9265008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211912 kB' 'VmallocChunk: 0 kB' 'Percpu: 57600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 931264 kB' 'DirectMap2M: 13424640 kB' 'DirectMap1G: 87031808 kB' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
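Once HugePages_Surp and HugePages_Rsvd both come back 0, the script echoes the derived counters (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 above), runs the two arithmetic checks, and then re-reads HugePages_Total, which is the scan in progress here. A sketch of that accounting step with this run's values; the name want below is hypothetical and stands in for whatever already-expanded variable produced the literal 1024 in the traced (( ... )) tests:

    want=1024     # expected pool size for this run (hypothetical name)
    surp=0        # HugePages_Surp, from the scan above
    resv=0        # HugePages_Rsvd, from the scan above
    anon=0        # AnonHugePages, from the first scan
    nr_hugepages=1024
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # pool accounting: expected size == persistent + surplus + reserved pages, and == nr_hugepages
    (( want == nr_hugepages + surp + resv ))
    (( want == nr_hugepages ))
    get_meminfo HugePages_Total   # re-read next; 1024 in this run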
00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.374 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
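The scan continues in the same fashion below until it reaches the HugePages_Total line, at which point the helper echoes the value (1024) and returns 0; the caller in setup/hugepages.sh captures that through command substitution and asserts it against its own bookkeeping. A hedged sketch of that calling convention, reusing the get_meminfo sketch above (nr_hugepages is assumed to be the count the test requested):

    nr_hugepages=1024                              # what the test asked the kernel for (assumption)
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # mirrors the hugepages.sh@110 check visible below: every page must be accounted for
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count: $total"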
00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.375 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.375 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069912 kB' 'MemFree: 41409852 kB' 'MemUsed: 6660060 kB' 'SwapCached: 0 kB' 'Active: 2586564 kB' 'Inactive: 104868 kB' 'Active(anon): 2293640 kB' 'Inactive(anon): 0 kB' 'Active(file): 292924 kB' 'Inactive(file): 104868 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2402380 kB' 'Mapped: 94756 kB' 'AnonPages: 292212 kB' 'Shmem: 2004588 kB' 'KernelStack: 8776 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63760 kB' 'Slab: 240256 kB' 'SReclaimable: 63760 kB' 'SUnreclaim: 176496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 
12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 
12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.376 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.377 12:22:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.377 node0=1024 expecting 1024 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.377 00:05:12.377 real 0m10.268s 00:05:12.377 user 0m3.315s 00:05:12.377 sys 0m6.960s 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.377 12:22:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.377 ************************************ 00:05:12.377 END TEST no_shrink_alloc 00:05:12.377 ************************************ 00:05:12.636 12:22:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 
0 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:12.636 12:22:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:12.636 00:05:12.636 real 0m39.535s 00:05:12.636 user 0m11.791s 00:05:12.636 sys 0m24.566s 00:05:12.636 12:22:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.636 12:22:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.636 ************************************ 00:05:12.636 END TEST hugepages 00:05:12.636 ************************************ 00:05:12.636 12:22:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:12.636 12:22:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:12.636 12:22:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.636 12:22:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.636 12:22:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.636 ************************************ 00:05:12.636 START TEST driver 00:05:12.636 ************************************ 00:05:12.636 12:22:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:05:12.636 * Looking for test storage... 
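Two details from the hugepages run that just ended are worth spelling out before the driver output continues: per-node counters come from /sys/devices/system/node/nodeN/meminfo whenever that file exists, and the clear_hp teardown hands the pages back by zeroing each node's hugepage pools. A rough sketch of both steps (the echo 0 target is inferred, since the trace only shows the echo itself):

    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lines carry a "Node 0 " prefix
    fi
    grep HugePages_Surp "$mem_f"

    # clear_hp teardown: return the pages on every node, then flag huge memory as clearable
    for hp in /sys/devices/system/node/node$node/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # target file inferred; the trace shows only "echo 0"
    done
    export CLEAR_HUGE=yes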
00:05:12.636 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:12.636 12:22:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:12.636 12:22:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.636 12:22:07 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.205 12:22:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:19.205 12:22:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.205 12:22:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.205 12:22:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.205 ************************************ 00:05:19.205 START TEST guess_driver 00:05:19.205 ************************************ 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 190 > 0 )) 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:19.205 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:19.205 12:22:14 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:19.205 Looking for driver=vfio-pci 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.205 12:22:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:22.525 12:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:22.525 12:22:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.863 12:22:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.863 12:22:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.863 12:22:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.236 12:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:27.236 12:22:22 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:27.236 12:22:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.236 12:22:22 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:33.810 00:05:33.810 real 0m14.095s 00:05:33.810 user 0m2.688s 00:05:33.810 sys 0m7.156s 00:05:33.810 12:22:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.810 12:22:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:33.810 ************************************ 00:05:33.810 END TEST guess_driver 00:05:33.810 ************************************ 00:05:33.810 12:22:28 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:33.810 00:05:33.810 real 0m20.747s 00:05:33.810 user 0m4.674s 
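The guess_driver test above reduces to three checks: whether vfio exposes its unsafe-no-IOMMU knob, whether any IOMMU groups exist under /sys/kernel/iommu_groups (190 on this host), and whether modprobe can resolve vfio_pci and its dependencies. A hedged reconstruction follows; the uio_pci_generic fallback is an assumption, since the trace only exercises the vfio-pci branch:

    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)   # "N" on this host
    fi
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        driver=vfio-pci          # IOMMU groups exist and the module resolves, so prefer vfio-pci
    else
        driver=uio_pci_generic   # assumption: fallback when no usable IOMMU/vfio stack is present
    fi
    echo "Looking for driver=$driver"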
00:05:33.810 sys 0m11.032s 00:05:33.810 12:22:28 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.810 12:22:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:33.810 ************************************ 00:05:33.810 END TEST driver 00:05:33.810 ************************************ 00:05:33.810 12:22:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:33.810 12:22:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:33.810 12:22:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.810 12:22:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.810 12:22:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.810 ************************************ 00:05:33.810 START TEST devices 00:05:33.810 ************************************ 00:05:33.810 12:22:28 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:33.810 * Looking for test storage... 00:05:33.810 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:33.810 12:22:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:33.810 12:22:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:33.810 12:22:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.810 12:22:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.078 12:22:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:05:39.078 12:22:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:39.078 12:22:33 setup.sh.devices -- 
scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:39.078 12:22:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:39.078 No valid GPT data, bailing 00:05:39.078 12:22:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.078 12:22:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:39.078 12:22:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:39.078 12:22:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:39.078 12:22:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:39.078 12:22:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:39.078 12:22:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:39.078 12:22:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.078 12:22:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.078 12:22:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:39.078 ************************************ 00:05:39.078 START TEST nvme_mount 00:05:39.078 ************************************ 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 
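Before the partitioning below starts, the devices test screened its candidate disks: zoned namespaces are skipped, a disk with no partition table (the "No valid GPT data, bailing" from spdk-gpt.py plus an empty blkid PTTYPE) counts as free, and anything under 3 GiB is rejected. A simplified sketch of that screening, with paths and helper names illustrative rather than the exact SPDK code:

    min_disk_size=$((3 * 1024 * 1024 * 1024))       # 3221225472 bytes, as in devices.sh@198
    for nvme in /sys/block/nvme*; do                # the real glob also excludes nvme*c* multipath nodes
        dev=${nvme##*/}
        [[ $(cat "$nvme/queue/zoned" 2>/dev/null) != none ]] && continue
        pt=$(blkid -s PTTYPE -o value "/dev/$dev")  # empty means no partition table, so the disk is free
        [[ -n $pt ]] && continue
        size=$(( $(cat "$nvme/size") * 512 ))       # sectors to bytes, the 4000787030016 seen above
        (( size >= min_disk_size )) && echo "candidate: /dev/$dev ($size bytes)"
    done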
00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:39.078 12:22:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:40.014 Creating new GPT entries in memory. 00:05:40.014 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:40.014 other utilities. 00:05:40.014 12:22:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:40.014 12:22:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.014 12:22:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:40.014 12:22:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:40.014 12:22:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:41.389 Creating new GPT entries in memory. 00:05:41.389 The operation has completed successfully. 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4124145 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.389 12:22:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:44.671 12:22:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.046 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:46.046 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:46.046 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.046 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.046 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:46.305 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:46.305 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:46.564 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:46.564 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:46.564 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:46.564 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs 
/dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.564 12:22:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:49.851 12:22:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.223 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.224 12:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.443 12:22:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:55.444 12:22:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:56.820 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:56.820 00:05:56.820 real 0m17.706s 00:05:56.820 user 0m5.061s 00:05:56.820 sys 0m10.337s 
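Note: the nvme_mount run traced above reduces to a partition/format/mount/verify/cleanup cycle. A minimal standalone sketch of that cycle follows; the device name and mount point are placeholders, and this is not the literal body of setup/devices.sh, just the same commands the trace shows in order.
# wipe any existing partition table, then create a single 1 GiB partition
sgdisk /dev/nvme0n1 --zap-all
sgdisk /dev/nvme0n1 --new=1:2048:2099199
# format and mount the partition, then drop a marker file to verify the mount
mkfs.ext4 -qF /dev/nvme0n1p1
mkdir -p /tmp/nvme_mount
mount /dev/nvme0n1p1 /tmp/nvme_mount
touch /tmp/nvme_mount/test_nvme
# cleanup: remove the marker, unmount, and erase filesystem and GPT signatures
rm /tmp/nvme_mount/test_nvme
umount /tmp/nvme_mount
wipefs --all /dev/nvme0n1p1
wipefs --all /dev/nvme0n1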
00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.820 12:22:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:56.820 ************************************ 00:05:56.820 END TEST nvme_mount 00:05:56.820 ************************************ 00:05:56.820 12:22:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:56.820 12:22:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:56.820 12:22:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.820 12:22:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.820 12:22:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:56.820 ************************************ 00:05:56.820 START TEST dm_mount 00:05:56.820 ************************************ 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:56.820 12:22:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:57.757 Creating new GPT entries in memory. 00:05:57.758 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:57.758 other utilities. 
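Note: the partition bounds used by the sgdisk calls that follow come from the sector arithmetic in setup/common.sh visible in the trace. A minimal sketch of that arithmetic, assuming the same 1 GiB (1073741824-byte) partition size and 512-byte sectors:
size=$((1073741824 / 512))   # 1 GiB in 512-byte sectors = 2097152
part_start=0 part_end=0
for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "partition $part: $part_start..$part_end"   # prints 2048..2099199, then 2099200..4196351
done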
00:05:57.758 12:22:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:57.758 12:22:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.758 12:22:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:57.758 12:22:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:57.758 12:22:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:59.132 Creating new GPT entries in memory. 00:05:59.132 The operation has completed successfully. 00:05:59.132 12:22:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:59.132 12:22:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:59.132 12:22:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:59.132 12:22:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:59.132 12:22:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:00.068 The operation has completed successfully. 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4128946 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:00.068 12:22:54 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.068 12:22:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.377 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:03.378 12:22:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.280 12:23:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:06:08.563 12:23:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:10.464 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:10.464 00:06:10.464 real 0m13.575s 00:06:10.464 user 0m3.590s 00:06:10.464 sys 0m6.969s 
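Note: the dm_mount run above layers a device-mapper node over the two partitions before formatting and mounting it. A minimal sketch of the same cycle; the linear table shown is an assumption for illustration (the script may build its table differently) and the mount point is a placeholder.
sgdisk /dev/nvme0n1 --zap-all
sgdisk /dev/nvme0n1 --new=1:2048:2099199
sgdisk /dev/nvme0n1 --new=2:2099200:4196351
# stitch the two partitions into one dm device (assumed linear concatenation)
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount
# cleanup mirrors the trace: unmount, remove the dm node, wipe partition signatures
umount /tmp/dm_mount
dmsetup remove --force nvme_dm_test
wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2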
00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.464 12:23:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:10.464 ************************************ 00:06:10.464 END TEST dm_mount 00:06:10.464 ************************************ 00:06:10.464 12:23:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.464 12:23:05 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:10.722 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:10.722 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:10.722 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:10.722 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.722 12:23:05 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:10.722 00:06:10.722 real 0m37.322s 00:06:10.722 user 0m10.679s 00:06:10.722 sys 0m21.170s 00:06:10.722 12:23:05 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.722 12:23:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:10.722 ************************************ 00:06:10.722 END TEST devices 00:06:10.722 ************************************ 00:06:10.722 12:23:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:10.722 00:06:10.722 real 2m12.176s 00:06:10.722 user 0m36.783s 00:06:10.722 sys 1m17.673s 00:06:10.722 12:23:05 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.722 12:23:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:10.722 ************************************ 00:06:10.722 END TEST setup.sh 00:06:10.722 ************************************ 00:06:10.722 12:23:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.722 12:23:05 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:06:14.007 Hugepages 00:06:14.007 node hugesize free / total 00:06:14.007 node0 1048576kB 0 / 0 00:06:14.007 node0 2048kB 2048 / 2048 00:06:14.007 node1 1048576kB 0 / 0 00:06:14.007 node1 2048kB 0 / 0 00:06:14.007 00:06:14.007 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.007 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:06:14.007 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:14.007 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:14.007 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:14.007 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:14.007 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:14.007 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:14.007 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:14.007 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:14.007 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:14.007 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:14.007 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:14.007 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:14.267 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:14.267 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:14.267 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:14.267 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:14.267 12:23:09 -- spdk/autotest.sh@130 -- # uname -s 00:06:14.267 12:23:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:14.267 12:23:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:14.267 12:23:09 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:17.556 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:17.556 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:20.885 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:22.258 12:23:17 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:23.631 12:23:18 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:23.631 12:23:18 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:23.631 12:23:18 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:23.631 12:23:18 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:23.631 12:23:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:23.631 12:23:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:23.631 12:23:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:23.631 12:23:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:23.632 12:23:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:23.632 12:23:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:23.632 12:23:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:06:23.632 12:23:18 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:26.920 Waiting for block devices as requested 00:06:26.920 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:06:27.178 0000:00:04.7 (8086 2021): 
vfio-pci -> ioatdma 00:06:27.178 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:27.178 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:27.178 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:27.436 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:27.436 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:27.436 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:27.693 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:27.694 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:27.694 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:27.952 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:27.952 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:27.952 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:28.210 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:28.210 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:28.210 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:30.110 12:23:25 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:30.110 12:23:25 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1502 -- # grep 0000:1a:00.0/nvme/nvme 00:06:30.110 12:23:25 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:06:30.110 12:23:25 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:30.110 12:23:25 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:30.110 12:23:25 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:30.110 12:23:25 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:30.369 12:23:25 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:30.369 12:23:25 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:30.369 12:23:25 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:30.369 12:23:25 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:30.369 12:23:25 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:30.369 12:23:25 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:30.369 12:23:25 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:30.369 12:23:25 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:30.369 12:23:25 -- common/autotest_common.sh@1557 -- # continue 00:06:30.369 12:23:25 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:30.369 12:23:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.369 12:23:25 -- common/autotest_common.sh@10 -- # set +x 00:06:30.369 12:23:25 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:30.369 12:23:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.369 12:23:25 -- common/autotest_common.sh@10 -- # set +x 00:06:30.369 12:23:25 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:33.657 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:06:33.657 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:33.657 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:36.943 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:38.840 12:23:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:38.840 12:23:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.840 12:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.840 12:23:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:38.840 12:23:33 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:38.840 12:23:33 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:38.840 12:23:33 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:38.840 12:23:33 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:38.840 12:23:33 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:38.840 12:23:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:38.840 12:23:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:38.840 12:23:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:38.840 12:23:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:38.840 12:23:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:38.840 12:23:33 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:38.840 12:23:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:1a:00.0 00:06:38.840 12:23:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:38.840 12:23:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:06:38.840 12:23:33 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:38.840 12:23:33 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:38.840 12:23:33 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:38.840 12:23:33 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:1a:00.0 00:06:38.840 12:23:33 -- common/autotest_common.sh@1592 -- # [[ -z 0000:1a:00.0 ]] 00:06:38.840 12:23:33 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4138887 00:06:38.840 12:23:33 -- common/autotest_common.sh@1598 -- # waitforlisten 4138887 00:06:38.840 12:23:33 -- common/autotest_common.sh@829 -- # '[' -z 4138887 ']' 00:06:38.840 12:23:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.840 12:23:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.840 12:23:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
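Note: the opal_revert_cleanup step under way here (and continued below in the trace) amounts to: start spdk_tgt, wait for its RPC socket, attach the NVMe controller, attempt an Opal revert. A minimal sketch of that flow, run from the SPDK tree; the socket poll is a crude stand-in for waitforlisten.
./build/bin/spdk_tgt &
tgt_pid=$!
# wait until the RPC socket exists (waitforlisten does this with retries and a timeout)
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0
# this drive reports no Opal support, so the revert returns an error, as the trace shows
./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true
kill "$tgt_pid"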
00:06:38.840 12:23:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.840 12:23:33 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.840 12:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.840 [2024-07-15 12:23:33.735116] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:06:38.841 [2024-07-15 12:23:33.735202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138887 ] 00:06:38.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.841 [2024-07-15 12:23:33.808752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.841 [2024-07-15 12:23:33.898879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.774 12:23:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.774 12:23:34 -- common/autotest_common.sh@862 -- # return 0 00:06:39.774 12:23:34 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:39.774 12:23:34 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:39.774 12:23:34 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:06:43.051 nvme0n1 00:06:43.051 12:23:37 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:43.051 [2024-07-15 12:23:37.723453] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:43.051 request: 00:06:43.051 { 00:06:43.051 "nvme_ctrlr_name": "nvme0", 00:06:43.051 "password": "test", 00:06:43.051 "method": "bdev_nvme_opal_revert", 00:06:43.051 "req_id": 1 00:06:43.051 } 00:06:43.051 Got JSON-RPC error response 00:06:43.051 response: 00:06:43.051 { 00:06:43.051 "code": -32602, 00:06:43.051 "message": "Invalid parameters" 00:06:43.051 } 00:06:43.051 12:23:37 -- common/autotest_common.sh@1604 -- # true 00:06:43.051 12:23:37 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:43.051 12:23:37 -- common/autotest_common.sh@1608 -- # killprocess 4138887 00:06:43.051 12:23:37 -- common/autotest_common.sh@948 -- # '[' -z 4138887 ']' 00:06:43.051 12:23:37 -- common/autotest_common.sh@952 -- # kill -0 4138887 00:06:43.051 12:23:37 -- common/autotest_common.sh@953 -- # uname 00:06:43.051 12:23:37 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.051 12:23:37 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4138887 00:06:43.051 12:23:37 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.051 12:23:37 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.051 12:23:37 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4138887' 00:06:43.051 killing process with pid 4138887 00:06:43.051 12:23:37 -- common/autotest_common.sh@967 -- # kill 4138887 00:06:43.051 12:23:37 -- common/autotest_common.sh@972 -- # wait 4138887 00:06:47.271 12:23:41 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:47.271 12:23:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:47.271 12:23:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:47.271 12:23:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:47.271 12:23:41 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:47.271 
12:23:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.271 12:23:41 -- common/autotest_common.sh@10 -- # set +x 00:06:47.271 12:23:41 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:47.271 12:23:41 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:47.271 12:23:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.271 12:23:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.271 12:23:41 -- common/autotest_common.sh@10 -- # set +x 00:06:47.271 ************************************ 00:06:47.271 START TEST env 00:06:47.271 ************************************ 00:06:47.271 12:23:41 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:47.271 * Looking for test storage... 00:06:47.271 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:47.271 12:23:41 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:47.271 12:23:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.271 12:23:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.271 12:23:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:47.271 ************************************ 00:06:47.271 START TEST env_memory 00:06:47.271 ************************************ 00:06:47.271 12:23:41 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:47.271 00:06:47.271 00:06:47.271 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.271 http://cunit.sourceforge.net/ 00:06:47.271 00:06:47.271 00:06:47.271 Suite: memory 00:06:47.271 Test: alloc and free memory map ...[2024-07-15 12:23:41.922035] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:47.271 passed 00:06:47.271 Test: mem map translation ...[2024-07-15 12:23:41.935271] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:47.271 [2024-07-15 12:23:41.935289] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:47.271 [2024-07-15 12:23:41.935320] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:47.271 [2024-07-15 12:23:41.935329] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:47.271 passed 00:06:47.271 Test: mem map registration ...[2024-07-15 12:23:41.956397] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:47.271 [2024-07-15 12:23:41.956413] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:47.271 passed 00:06:47.271 Test: mem map adjacent registrations ...passed 00:06:47.271 00:06:47.271 Run Summary: Type Total Ran Passed Failed 
Inactive 00:06:47.271 suites 1 1 n/a 0 0 00:06:47.271 tests 4 4 4 0 0 00:06:47.271 asserts 152 152 152 0 n/a 00:06:47.271 00:06:47.271 Elapsed time = 0.084 seconds 00:06:47.271 00:06:47.271 real 0m0.093s 00:06:47.271 user 0m0.083s 00:06:47.271 sys 0m0.009s 00:06:47.271 12:23:41 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.271 12:23:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:47.271 ************************************ 00:06:47.271 END TEST env_memory 00:06:47.271 ************************************ 00:06:47.271 12:23:42 env -- common/autotest_common.sh@1142 -- # return 0 00:06:47.271 12:23:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:47.271 12:23:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.271 12:23:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.271 12:23:42 env -- common/autotest_common.sh@10 -- # set +x 00:06:47.271 ************************************ 00:06:47.271 START TEST env_vtophys 00:06:47.271 ************************************ 00:06:47.271 12:23:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:47.271 EAL: lib.eal log level changed from notice to debug 00:06:47.271 EAL: Detected lcore 0 as core 0 on socket 0 00:06:47.271 EAL: Detected lcore 1 as core 1 on socket 0 00:06:47.271 EAL: Detected lcore 2 as core 2 on socket 0 00:06:47.271 EAL: Detected lcore 3 as core 3 on socket 0 00:06:47.271 EAL: Detected lcore 4 as core 4 on socket 0 00:06:47.271 EAL: Detected lcore 5 as core 8 on socket 0 00:06:47.271 EAL: Detected lcore 6 as core 9 on socket 0 00:06:47.271 EAL: Detected lcore 7 as core 10 on socket 0 00:06:47.271 EAL: Detected lcore 8 as core 11 on socket 0 00:06:47.271 EAL: Detected lcore 9 as core 16 on socket 0 00:06:47.271 EAL: Detected lcore 10 as core 17 on socket 0 00:06:47.271 EAL: Detected lcore 11 as core 18 on socket 0 00:06:47.271 EAL: Detected lcore 12 as core 19 on socket 0 00:06:47.271 EAL: Detected lcore 13 as core 20 on socket 0 00:06:47.271 EAL: Detected lcore 14 as core 24 on socket 0 00:06:47.271 EAL: Detected lcore 15 as core 25 on socket 0 00:06:47.271 EAL: Detected lcore 16 as core 26 on socket 0 00:06:47.271 EAL: Detected lcore 17 as core 27 on socket 0 00:06:47.271 EAL: Detected lcore 18 as core 0 on socket 1 00:06:47.271 EAL: Detected lcore 19 as core 1 on socket 1 00:06:47.271 EAL: Detected lcore 20 as core 2 on socket 1 00:06:47.271 EAL: Detected lcore 21 as core 3 on socket 1 00:06:47.271 EAL: Detected lcore 22 as core 4 on socket 1 00:06:47.271 EAL: Detected lcore 23 as core 8 on socket 1 00:06:47.271 EAL: Detected lcore 24 as core 9 on socket 1 00:06:47.271 EAL: Detected lcore 25 as core 10 on socket 1 00:06:47.271 EAL: Detected lcore 26 as core 11 on socket 1 00:06:47.271 EAL: Detected lcore 27 as core 16 on socket 1 00:06:47.271 EAL: Detected lcore 28 as core 17 on socket 1 00:06:47.271 EAL: Detected lcore 29 as core 18 on socket 1 00:06:47.271 EAL: Detected lcore 30 as core 19 on socket 1 00:06:47.271 EAL: Detected lcore 31 as core 20 on socket 1 00:06:47.271 EAL: Detected lcore 32 as core 24 on socket 1 00:06:47.271 EAL: Detected lcore 33 as core 25 on socket 1 00:06:47.271 EAL: Detected lcore 34 as core 26 on socket 1 00:06:47.271 EAL: Detected lcore 35 as core 27 on socket 1 00:06:47.271 EAL: Detected lcore 36 as core 0 on socket 0 00:06:47.271 EAL: Detected 
lcore 37 as core 1 on socket 0 00:06:47.271 EAL: Detected lcore 38 as core 2 on socket 0 00:06:47.271 EAL: Detected lcore 39 as core 3 on socket 0 00:06:47.271 EAL: Detected lcore 40 as core 4 on socket 0 00:06:47.271 EAL: Detected lcore 41 as core 8 on socket 0 00:06:47.271 EAL: Detected lcore 42 as core 9 on socket 0 00:06:47.271 EAL: Detected lcore 43 as core 10 on socket 0 00:06:47.271 EAL: Detected lcore 44 as core 11 on socket 0 00:06:47.271 EAL: Detected lcore 45 as core 16 on socket 0 00:06:47.271 EAL: Detected lcore 46 as core 17 on socket 0 00:06:47.271 EAL: Detected lcore 47 as core 18 on socket 0 00:06:47.271 EAL: Detected lcore 48 as core 19 on socket 0 00:06:47.271 EAL: Detected lcore 49 as core 20 on socket 0 00:06:47.271 EAL: Detected lcore 50 as core 24 on socket 0 00:06:47.271 EAL: Detected lcore 51 as core 25 on socket 0 00:06:47.272 EAL: Detected lcore 52 as core 26 on socket 0 00:06:47.272 EAL: Detected lcore 53 as core 27 on socket 0 00:06:47.272 EAL: Detected lcore 54 as core 0 on socket 1 00:06:47.272 EAL: Detected lcore 55 as core 1 on socket 1 00:06:47.272 EAL: Detected lcore 56 as core 2 on socket 1 00:06:47.272 EAL: Detected lcore 57 as core 3 on socket 1 00:06:47.272 EAL: Detected lcore 58 as core 4 on socket 1 00:06:47.272 EAL: Detected lcore 59 as core 8 on socket 1 00:06:47.272 EAL: Detected lcore 60 as core 9 on socket 1 00:06:47.272 EAL: Detected lcore 61 as core 10 on socket 1 00:06:47.272 EAL: Detected lcore 62 as core 11 on socket 1 00:06:47.272 EAL: Detected lcore 63 as core 16 on socket 1 00:06:47.272 EAL: Detected lcore 64 as core 17 on socket 1 00:06:47.272 EAL: Detected lcore 65 as core 18 on socket 1 00:06:47.272 EAL: Detected lcore 66 as core 19 on socket 1 00:06:47.272 EAL: Detected lcore 67 as core 20 on socket 1 00:06:47.272 EAL: Detected lcore 68 as core 24 on socket 1 00:06:47.272 EAL: Detected lcore 69 as core 25 on socket 1 00:06:47.272 EAL: Detected lcore 70 as core 26 on socket 1 00:06:47.272 EAL: Detected lcore 71 as core 27 on socket 1 00:06:47.272 EAL: Maximum logical cores by configuration: 128 00:06:47.272 EAL: Detected CPU lcores: 72 00:06:47.272 EAL: Detected NUMA nodes: 2 00:06:47.272 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:47.272 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:47.272 EAL: Checking presence of .so 'librte_eal.so' 00:06:47.272 EAL: Detected static linkage of DPDK 00:06:47.272 EAL: No shared files mode enabled, IPC will be disabled 00:06:47.272 EAL: Bus pci wants IOVA as 'DC' 00:06:47.272 EAL: Buses did not request a specific IOVA mode. 00:06:47.272 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:47.272 EAL: Selected IOVA mode 'VA' 00:06:47.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.272 EAL: Probing VFIO support... 00:06:47.272 EAL: IOMMU type 1 (Type 1) is supported 00:06:47.272 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:47.272 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:47.272 EAL: VFIO support initialized 00:06:47.272 EAL: Ask a virtual area of 0x2e000 bytes 00:06:47.272 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:47.272 EAL: Setting up physically contiguous memory... 
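The lcore-to-core/socket mapping detected above (72 lcores across 2 NUMA sockets) can be cross-checked against the host topology; a minimal sketch, assuming a standard util-linux lscpu is available on the node:
# compare EAL's lcore numbering with the kernel's CPU/core/socket view
lscpu -p=CPU,CORE,SOCKET | grep -v '^#' | head -n 5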
00:06:47.272 EAL: Setting maximum number of open files to 524288 00:06:47.272 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:47.272 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:47.272 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:47.272 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:47.272 EAL: Ask a virtual area of 0x61000 bytes 00:06:47.272 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:47.272 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:47.272 EAL: Ask a virtual area of 0x400000000 bytes 00:06:47.272 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:47.272 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:47.272 EAL: Hugepages will be freed exactly as allocated. 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: TSC frequency is ~2300000 KHz 00:06:47.272 EAL: Main lcore 0 is ready (tid=7fe6d373aa00;cpuset=[0]) 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 0 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 2MB 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Mem event callback 'spdk:(nil)' registered 00:06:47.272 00:06:47.272 00:06:47.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.272 http://cunit.sourceforge.net/ 00:06:47.272 00:06:47.272 00:06:47.272 Suite: components_suite 00:06:47.272 Test: vtophys_malloc_test ...passed 00:06:47.272 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 4MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 4MB 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 6MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 6MB 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 10MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 10MB 00:06:47.272 EAL: Trying to obtain current memory policy. 
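Each memseg list reserved above pairs a small 0x61000-byte area with a 0x400000000-byte virtual-area reservation; the large reservation is simply n_segs times the hugepage size, which can be verified with shell arithmetic alone (no SPDK assumptions):
# 8192 segments x 2 MiB hugepages per memseg list
printf '0x%x bytes\n' $(( 8192 * 2097152 ))   # -> 0x400000000 bytes (16 GiB), matching the reservations above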
00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 18MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 18MB 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 34MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 34MB 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 66MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 66MB 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 130MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was shrunk by 130MB 00:06:47.272 EAL: Trying to obtain current memory policy. 00:06:47.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.272 EAL: Restoring previous memory policy: 4 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.272 EAL: request: mp_malloc_sync 00:06:47.272 EAL: No shared files mode enabled, IPC is disabled 00:06:47.272 EAL: Heap on socket 0 was expanded by 258MB 00:06:47.272 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.531 EAL: request: mp_malloc_sync 00:06:47.531 EAL: No shared files mode enabled, IPC is disabled 00:06:47.531 EAL: Heap on socket 0 was shrunk by 258MB 00:06:47.531 EAL: Trying to obtain current memory policy. 
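The heap expansions logged by vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130 and 258 MB so far, with 514 and 1026 MB below) appear to follow a 2^k + 2 MB progression; a one-liner that reproduces the observed sequence, purely as a reading of this output:
# observed expansion sizes: 2^k + 2 MB for k = 1..10
for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB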
00:06:47.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:47.531 EAL: Restoring previous memory policy: 4 00:06:47.531 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.531 EAL: request: mp_malloc_sync 00:06:47.531 EAL: No shared files mode enabled, IPC is disabled 00:06:47.531 EAL: Heap on socket 0 was expanded by 514MB 00:06:47.531 EAL: Calling mem event callback 'spdk:(nil)' 00:06:47.789 EAL: request: mp_malloc_sync 00:06:47.789 EAL: No shared files mode enabled, IPC is disabled 00:06:47.789 EAL: Heap on socket 0 was shrunk by 514MB 00:06:47.789 EAL: Trying to obtain current memory policy. 00:06:47.789 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:48.047 EAL: Restoring previous memory policy: 4 00:06:48.047 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.047 EAL: request: mp_malloc_sync 00:06:48.047 EAL: No shared files mode enabled, IPC is disabled 00:06:48.047 EAL: Heap on socket 0 was expanded by 1026MB 00:06:48.047 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.305 EAL: request: mp_malloc_sync 00:06:48.305 EAL: No shared files mode enabled, IPC is disabled 00:06:48.305 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:48.305 passed 00:06:48.305 00:06:48.305 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.305 suites 1 1 n/a 0 0 00:06:48.305 tests 2 2 2 0 0 00:06:48.305 asserts 497 497 497 0 n/a 00:06:48.305 00:06:48.305 Elapsed time = 1.106 seconds 00:06:48.305 EAL: Calling mem event callback 'spdk:(nil)' 00:06:48.305 EAL: request: mp_malloc_sync 00:06:48.305 EAL: No shared files mode enabled, IPC is disabled 00:06:48.305 EAL: Heap on socket 0 was shrunk by 2MB 00:06:48.305 EAL: No shared files mode enabled, IPC is disabled 00:06:48.305 EAL: No shared files mode enabled, IPC is disabled 00:06:48.305 EAL: No shared files mode enabled, IPC is disabled 00:06:48.305 00:06:48.305 real 0m1.235s 00:06:48.305 user 0m0.713s 00:06:48.305 sys 0m0.493s 00:06:48.305 12:23:43 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.305 12:23:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:48.305 ************************************ 00:06:48.305 END TEST env_vtophys 00:06:48.305 ************************************ 00:06:48.305 12:23:43 env -- common/autotest_common.sh@1142 -- # return 0 00:06:48.305 12:23:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:48.305 12:23:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.305 12:23:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.305 12:23:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.305 ************************************ 00:06:48.305 START TEST env_pci 00:06:48.305 ************************************ 00:06:48.305 12:23:43 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:48.305 00:06:48.305 00:06:48.305 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.305 http://cunit.sourceforge.net/ 00:06:48.305 00:06:48.305 00:06:48.305 Suite: pci 00:06:48.305 Test: pci_hook ...[2024-07-15 12:23:43.396846] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4140203 has claimed it 00:06:48.562 EAL: Cannot find device (10000:00:01.0) 00:06:48.562 EAL: Failed to attach device on primary process 00:06:48.562 passed 
00:06:48.562 00:06:48.562 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.562 suites 1 1 n/a 0 0 00:06:48.562 tests 1 1 1 0 0 00:06:48.562 asserts 25 25 25 0 n/a 00:06:48.562 00:06:48.562 Elapsed time = 0.037 seconds 00:06:48.562 00:06:48.562 real 0m0.057s 00:06:48.562 user 0m0.015s 00:06:48.562 sys 0m0.042s 00:06:48.562 12:23:43 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.562 12:23:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:48.562 ************************************ 00:06:48.562 END TEST env_pci 00:06:48.562 ************************************ 00:06:48.562 12:23:43 env -- common/autotest_common.sh@1142 -- # return 0 00:06:48.562 12:23:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:48.562 12:23:43 env -- env/env.sh@15 -- # uname 00:06:48.562 12:23:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:48.562 12:23:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:48.563 12:23:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:48.563 12:23:43 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:48.563 12:23:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.563 12:23:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.563 ************************************ 00:06:48.563 START TEST env_dpdk_post_init 00:06:48.563 ************************************ 00:06:48.563 12:23:43 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:48.563 EAL: Detected CPU lcores: 72 00:06:48.563 EAL: Detected NUMA nodes: 2 00:06:48.563 EAL: Detected static linkage of DPDK 00:06:48.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:48.563 EAL: Selected IOVA mode 'VA' 00:06:48.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.563 EAL: VFIO support initialized 00:06:48.563 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:48.563 EAL: Using IOMMU type 1 (Type 1) 00:06:49.498 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:54.761 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:54.761 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:55.019 Starting DPDK initialization... 00:06:55.019 Starting SPDK post initialization... 00:06:55.019 SPDK NVMe probe 00:06:55.019 Attaching to 0000:1a:00.0 00:06:55.019 Attached to 0000:1a:00.0 00:06:55.019 Cleaning up... 
00:06:55.019 00:06:55.019 real 0m6.495s 00:06:55.019 user 0m4.982s 00:06:55.019 sys 0m0.762s 00:06:55.019 12:23:50 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.019 12:23:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:55.019 ************************************ 00:06:55.019 END TEST env_dpdk_post_init 00:06:55.019 ************************************ 00:06:55.019 12:23:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:55.019 12:23:50 env -- env/env.sh@26 -- # uname 00:06:55.019 12:23:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:55.019 12:23:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.019 12:23:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.019 12:23:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.019 12:23:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.019 ************************************ 00:06:55.019 START TEST env_mem_callbacks 00:06:55.019 ************************************ 00:06:55.019 12:23:50 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.019 EAL: Detected CPU lcores: 72 00:06:55.019 EAL: Detected NUMA nodes: 2 00:06:55.019 EAL: Detected static linkage of DPDK 00:06:55.019 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:55.278 EAL: Selected IOVA mode 'VA' 00:06:55.278 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.278 EAL: VFIO support initialized 00:06:55.278 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:55.278 00:06:55.278 00:06:55.278 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.278 http://cunit.sourceforge.net/ 00:06:55.278 00:06:55.278 00:06:55.278 Suite: memory 00:06:55.278 Test: test ... 
00:06:55.278 register 0x200000200000 2097152 00:06:55.278 malloc 3145728 00:06:55.278 register 0x200000400000 4194304 00:06:55.278 buf 0x200000500000 len 3145728 PASSED 00:06:55.278 malloc 64 00:06:55.278 buf 0x2000004fff40 len 64 PASSED 00:06:55.278 malloc 4194304 00:06:55.278 register 0x200000800000 6291456 00:06:55.278 buf 0x200000a00000 len 4194304 PASSED 00:06:55.278 free 0x200000500000 3145728 00:06:55.278 free 0x2000004fff40 64 00:06:55.278 unregister 0x200000400000 4194304 PASSED 00:06:55.278 free 0x200000a00000 4194304 00:06:55.278 unregister 0x200000800000 6291456 PASSED 00:06:55.278 malloc 8388608 00:06:55.278 register 0x200000400000 10485760 00:06:55.278 buf 0x200000600000 len 8388608 PASSED 00:06:55.278 free 0x200000600000 8388608 00:06:55.278 unregister 0x200000400000 10485760 PASSED 00:06:55.278 passed 00:06:55.278 00:06:55.278 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.278 suites 1 1 n/a 0 0 00:06:55.278 tests 1 1 1 0 0 00:06:55.278 asserts 15 15 15 0 n/a 00:06:55.278 00:06:55.278 Elapsed time = 0.005 seconds 00:06:55.278 00:06:55.278 real 0m0.066s 00:06:55.278 user 0m0.017s 00:06:55.278 sys 0m0.049s 00:06:55.278 12:23:50 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.278 12:23:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:55.278 ************************************ 00:06:55.278 END TEST env_mem_callbacks 00:06:55.278 ************************************ 00:06:55.278 12:23:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:55.278 00:06:55.278 real 0m8.440s 00:06:55.278 user 0m5.999s 00:06:55.278 sys 0m1.696s 00:06:55.278 12:23:50 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.278 12:23:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.278 ************************************ 00:06:55.278 END TEST env 00:06:55.278 ************************************ 00:06:55.278 12:23:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.278 12:23:50 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:55.278 12:23:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.278 12:23:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.278 12:23:50 -- common/autotest_common.sh@10 -- # set +x 00:06:55.278 ************************************ 00:06:55.278 START TEST rpc 00:06:55.278 ************************************ 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:55.278 * Looking for test storage... 00:06:55.278 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:55.278 12:23:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4141194 00:06:55.278 12:23:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.278 12:23:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:55.278 12:23:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4141194 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@829 -- # '[' -z 4141194 ']' 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
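Once spdk_tgt is listening on /var/tmp/spdk.sock, the rpc tests below drive it through scripts/rpc.py; a minimal sketch of the same kind of calls issued by hand, using only command names that appear in the output below:
# poke the freshly started target over its default UNIX socket
./scripts/rpc.py spdk_get_version
./scripts/rpc.py bdev_malloc_create 8 512        # prints the new bdev name, e.g. Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length      # -> 1
./scripts/rpc.py bdev_malloc_delete Malloc0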
00:06:55.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.278 12:23:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.537 [2024-07-15 12:23:50.422572] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:06:55.537 [2024-07-15 12:23:50.422639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141194 ] 00:06:55.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.537 [2024-07-15 12:23:50.496745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.537 [2024-07-15 12:23:50.577571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:55.537 [2024-07-15 12:23:50.577617] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4141194' to capture a snapshot of events at runtime. 00:06:55.537 [2024-07-15 12:23:50.577627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.537 [2024-07-15 12:23:50.577636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.537 [2024-07-15 12:23:50.577643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4141194 for offline analysis/debug. 00:06:55.537 [2024-07-15 12:23:50.577673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.472 12:23:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.472 12:23:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.472 12:23:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:56.472 12:23:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:56.472 12:23:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:56.472 12:23:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:56.472 12:23:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.472 12:23:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.472 12:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.472 ************************************ 00:06:56.472 START TEST rpc_integrity 00:06:56.472 ************************************ 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:56.472 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.472 12:23:51 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:56.472 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:56.472 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:56.472 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.472 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:56.472 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:56.472 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:56.473 { 00:06:56.473 "name": "Malloc0", 00:06:56.473 "aliases": [ 00:06:56.473 "b1223cab-0721-4f9b-9c48-dd143c1cdfe6" 00:06:56.473 ], 00:06:56.473 "product_name": "Malloc disk", 00:06:56.473 "block_size": 512, 00:06:56.473 "num_blocks": 16384, 00:06:56.473 "uuid": "b1223cab-0721-4f9b-9c48-dd143c1cdfe6", 00:06:56.473 "assigned_rate_limits": { 00:06:56.473 "rw_ios_per_sec": 0, 00:06:56.473 "rw_mbytes_per_sec": 0, 00:06:56.473 "r_mbytes_per_sec": 0, 00:06:56.473 "w_mbytes_per_sec": 0 00:06:56.473 }, 00:06:56.473 "claimed": false, 00:06:56.473 "zoned": false, 00:06:56.473 "supported_io_types": { 00:06:56.473 "read": true, 00:06:56.473 "write": true, 00:06:56.473 "unmap": true, 00:06:56.473 "flush": true, 00:06:56.473 "reset": true, 00:06:56.473 "nvme_admin": false, 00:06:56.473 "nvme_io": false, 00:06:56.473 "nvme_io_md": false, 00:06:56.473 "write_zeroes": true, 00:06:56.473 "zcopy": true, 00:06:56.473 "get_zone_info": false, 00:06:56.473 "zone_management": false, 00:06:56.473 "zone_append": false, 00:06:56.473 "compare": false, 00:06:56.473 "compare_and_write": false, 00:06:56.473 "abort": true, 00:06:56.473 "seek_hole": false, 00:06:56.473 "seek_data": false, 00:06:56.473 "copy": true, 00:06:56.473 "nvme_iov_md": false 00:06:56.473 }, 00:06:56.473 "memory_domains": [ 00:06:56.473 { 00:06:56.473 "dma_device_id": "system", 00:06:56.473 "dma_device_type": 1 00:06:56.473 }, 00:06:56.473 { 00:06:56.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.473 "dma_device_type": 2 00:06:56.473 } 00:06:56.473 ], 00:06:56.473 "driver_specific": {} 00:06:56.473 } 00:06:56.473 ]' 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 [2024-07-15 12:23:51.441426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:56.473 [2024-07-15 12:23:51.441466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.473 [2024-07-15 12:23:51.441486] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4475770 00:06:56.473 [2024-07-15 12:23:51.441497] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:56.473 [2024-07-15 12:23:51.442377] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.473 [2024-07-15 12:23:51.442401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:56.473 Passthru0 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:56.473 { 00:06:56.473 "name": "Malloc0", 00:06:56.473 "aliases": [ 00:06:56.473 "b1223cab-0721-4f9b-9c48-dd143c1cdfe6" 00:06:56.473 ], 00:06:56.473 "product_name": "Malloc disk", 00:06:56.473 "block_size": 512, 00:06:56.473 "num_blocks": 16384, 00:06:56.473 "uuid": "b1223cab-0721-4f9b-9c48-dd143c1cdfe6", 00:06:56.473 "assigned_rate_limits": { 00:06:56.473 "rw_ios_per_sec": 0, 00:06:56.473 "rw_mbytes_per_sec": 0, 00:06:56.473 "r_mbytes_per_sec": 0, 00:06:56.473 "w_mbytes_per_sec": 0 00:06:56.473 }, 00:06:56.473 "claimed": true, 00:06:56.473 "claim_type": "exclusive_write", 00:06:56.473 "zoned": false, 00:06:56.473 "supported_io_types": { 00:06:56.473 "read": true, 00:06:56.473 "write": true, 00:06:56.473 "unmap": true, 00:06:56.473 "flush": true, 00:06:56.473 "reset": true, 00:06:56.473 "nvme_admin": false, 00:06:56.473 "nvme_io": false, 00:06:56.473 "nvme_io_md": false, 00:06:56.473 "write_zeroes": true, 00:06:56.473 "zcopy": true, 00:06:56.473 "get_zone_info": false, 00:06:56.473 "zone_management": false, 00:06:56.473 "zone_append": false, 00:06:56.473 "compare": false, 00:06:56.473 "compare_and_write": false, 00:06:56.473 "abort": true, 00:06:56.473 "seek_hole": false, 00:06:56.473 "seek_data": false, 00:06:56.473 "copy": true, 00:06:56.473 "nvme_iov_md": false 00:06:56.473 }, 00:06:56.473 "memory_domains": [ 00:06:56.473 { 00:06:56.473 "dma_device_id": "system", 00:06:56.473 "dma_device_type": 1 00:06:56.473 }, 00:06:56.473 { 00:06:56.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.473 "dma_device_type": 2 00:06:56.473 } 00:06:56.473 ], 00:06:56.473 "driver_specific": {} 00:06:56.473 }, 00:06:56.473 { 00:06:56.473 "name": "Passthru0", 00:06:56.473 "aliases": [ 00:06:56.473 "4b2bf6ac-04d2-58d2-bb24-52efa622ca42" 00:06:56.473 ], 00:06:56.473 "product_name": "passthru", 00:06:56.473 "block_size": 512, 00:06:56.473 "num_blocks": 16384, 00:06:56.473 "uuid": "4b2bf6ac-04d2-58d2-bb24-52efa622ca42", 00:06:56.473 "assigned_rate_limits": { 00:06:56.473 "rw_ios_per_sec": 0, 00:06:56.473 "rw_mbytes_per_sec": 0, 00:06:56.473 "r_mbytes_per_sec": 0, 00:06:56.473 "w_mbytes_per_sec": 0 00:06:56.473 }, 00:06:56.473 "claimed": false, 00:06:56.473 "zoned": false, 00:06:56.473 "supported_io_types": { 00:06:56.473 "read": true, 00:06:56.473 "write": true, 00:06:56.473 "unmap": true, 00:06:56.473 "flush": true, 00:06:56.473 "reset": true, 00:06:56.473 "nvme_admin": false, 00:06:56.473 "nvme_io": false, 00:06:56.473 "nvme_io_md": false, 00:06:56.473 "write_zeroes": true, 00:06:56.473 "zcopy": true, 00:06:56.473 "get_zone_info": false, 00:06:56.473 "zone_management": false, 00:06:56.473 "zone_append": false, 00:06:56.473 "compare": false, 00:06:56.473 "compare_and_write": false, 00:06:56.473 "abort": true, 00:06:56.473 
"seek_hole": false, 00:06:56.473 "seek_data": false, 00:06:56.473 "copy": true, 00:06:56.473 "nvme_iov_md": false 00:06:56.473 }, 00:06:56.473 "memory_domains": [ 00:06:56.473 { 00:06:56.473 "dma_device_id": "system", 00:06:56.473 "dma_device_type": 1 00:06:56.473 }, 00:06:56.473 { 00:06:56.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.473 "dma_device_type": 2 00:06:56.473 } 00:06:56.473 ], 00:06:56.473 "driver_specific": { 00:06:56.473 "passthru": { 00:06:56.473 "name": "Passthru0", 00:06:56.473 "base_bdev_name": "Malloc0" 00:06:56.473 } 00:06:56.473 } 00:06:56.473 } 00:06:56.473 ]' 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:56.473 12:23:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:56.473 00:06:56.473 real 0m0.286s 00:06:56.473 user 0m0.166s 00:06:56.473 sys 0m0.055s 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.473 12:23:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:56.473 ************************************ 00:06:56.473 END TEST rpc_integrity 00:06:56.473 ************************************ 00:06:56.732 12:23:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:56.732 12:23:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:56.732 12:23:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.732 12:23:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.732 12:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.732 ************************************ 00:06:56.732 START TEST rpc_plugins 00:06:56.732 ************************************ 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:56.732 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.732 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:56.732 12:23:51 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.732 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.732 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:56.732 { 00:06:56.732 "name": "Malloc1", 00:06:56.732 "aliases": [ 00:06:56.732 "74e13702-4f16-471e-beac-44be60267659" 00:06:56.732 ], 00:06:56.732 "product_name": "Malloc disk", 00:06:56.732 "block_size": 4096, 00:06:56.732 "num_blocks": 256, 00:06:56.732 "uuid": "74e13702-4f16-471e-beac-44be60267659", 00:06:56.732 "assigned_rate_limits": { 00:06:56.732 "rw_ios_per_sec": 0, 00:06:56.732 "rw_mbytes_per_sec": 0, 00:06:56.732 "r_mbytes_per_sec": 0, 00:06:56.732 "w_mbytes_per_sec": 0 00:06:56.732 }, 00:06:56.732 "claimed": false, 00:06:56.732 "zoned": false, 00:06:56.732 "supported_io_types": { 00:06:56.732 "read": true, 00:06:56.732 "write": true, 00:06:56.732 "unmap": true, 00:06:56.732 "flush": true, 00:06:56.732 "reset": true, 00:06:56.732 "nvme_admin": false, 00:06:56.732 "nvme_io": false, 00:06:56.733 "nvme_io_md": false, 00:06:56.733 "write_zeroes": true, 00:06:56.733 "zcopy": true, 00:06:56.733 "get_zone_info": false, 00:06:56.733 "zone_management": false, 00:06:56.733 "zone_append": false, 00:06:56.733 "compare": false, 00:06:56.733 "compare_and_write": false, 00:06:56.733 "abort": true, 00:06:56.733 "seek_hole": false, 00:06:56.733 "seek_data": false, 00:06:56.733 "copy": true, 00:06:56.733 "nvme_iov_md": false 00:06:56.733 }, 00:06:56.733 "memory_domains": [ 00:06:56.733 { 00:06:56.733 "dma_device_id": "system", 00:06:56.733 "dma_device_type": 1 00:06:56.733 }, 00:06:56.733 { 00:06:56.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.733 "dma_device_type": 2 00:06:56.733 } 00:06:56.733 ], 00:06:56.733 "driver_specific": {} 00:06:56.733 } 00:06:56.733 ]' 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:56.733 12:23:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:56.733 00:06:56.733 real 0m0.143s 00:06:56.733 user 0m0.083s 00:06:56.733 sys 0m0.025s 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.733 12:23:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:56.733 ************************************ 00:06:56.733 END TEST rpc_plugins 00:06:56.733 ************************************ 00:06:56.733 12:23:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:56.733 12:23:51 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:56.733 12:23:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.733 12:23:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.733 12:23:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.991 ************************************ 00:06:56.991 START TEST rpc_trace_cmd_test 00:06:56.991 ************************************ 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:56.991 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4141194", 00:06:56.991 "tpoint_group_mask": "0x8", 00:06:56.991 "iscsi_conn": { 00:06:56.991 "mask": "0x2", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "scsi": { 00:06:56.991 "mask": "0x4", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "bdev": { 00:06:56.991 "mask": "0x8", 00:06:56.991 "tpoint_mask": "0xffffffffffffffff" 00:06:56.991 }, 00:06:56.991 "nvmf_rdma": { 00:06:56.991 "mask": "0x10", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "nvmf_tcp": { 00:06:56.991 "mask": "0x20", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "ftl": { 00:06:56.991 "mask": "0x40", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "blobfs": { 00:06:56.991 "mask": "0x80", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "dsa": { 00:06:56.991 "mask": "0x200", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "thread": { 00:06:56.991 "mask": "0x400", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "nvme_pcie": { 00:06:56.991 "mask": "0x800", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "iaa": { 00:06:56.991 "mask": "0x1000", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "nvme_tcp": { 00:06:56.991 "mask": "0x2000", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "bdev_nvme": { 00:06:56.991 "mask": "0x4000", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 }, 00:06:56.991 "sock": { 00:06:56.991 "mask": "0x8000", 00:06:56.991 "tpoint_mask": "0x0" 00:06:56.991 } 00:06:56.991 }' 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:56.991 12:23:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:56.991 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:56.991 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:56.991 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:56.992 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:56.992 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:56.992 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:57.249 12:23:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:06:57.249 00:06:57.249 real 0m0.235s 00:06:57.249 user 0m0.188s 00:06:57.249 sys 0m0.038s 00:06:57.249 12:23:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.249 12:23:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.249 ************************************ 00:06:57.249 END TEST rpc_trace_cmd_test 00:06:57.249 ************************************ 00:06:57.249 12:23:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:57.249 12:23:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:57.249 12:23:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:57.249 12:23:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:57.249 12:23:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.249 12:23:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.249 12:23:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 ************************************ 00:06:57.250 START TEST rpc_daemon_integrity 00:06:57.250 ************************************ 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:57.250 { 00:06:57.250 "name": "Malloc2", 00:06:57.250 "aliases": [ 00:06:57.250 "0d6abcec-1e01-4be4-8e0d-092ba4ce6f05" 00:06:57.250 ], 00:06:57.250 "product_name": "Malloc disk", 00:06:57.250 "block_size": 512, 00:06:57.250 "num_blocks": 16384, 00:06:57.250 "uuid": "0d6abcec-1e01-4be4-8e0d-092ba4ce6f05", 00:06:57.250 "assigned_rate_limits": { 00:06:57.250 "rw_ios_per_sec": 0, 00:06:57.250 "rw_mbytes_per_sec": 0, 00:06:57.250 "r_mbytes_per_sec": 0, 00:06:57.250 "w_mbytes_per_sec": 0 00:06:57.250 }, 00:06:57.250 "claimed": false, 00:06:57.250 "zoned": false, 00:06:57.250 "supported_io_types": { 00:06:57.250 "read": true, 00:06:57.250 "write": true, 00:06:57.250 "unmap": true, 00:06:57.250 "flush": true, 00:06:57.250 "reset": true, 00:06:57.250 "nvme_admin": false, 
00:06:57.250 "nvme_io": false, 00:06:57.250 "nvme_io_md": false, 00:06:57.250 "write_zeroes": true, 00:06:57.250 "zcopy": true, 00:06:57.250 "get_zone_info": false, 00:06:57.250 "zone_management": false, 00:06:57.250 "zone_append": false, 00:06:57.250 "compare": false, 00:06:57.250 "compare_and_write": false, 00:06:57.250 "abort": true, 00:06:57.250 "seek_hole": false, 00:06:57.250 "seek_data": false, 00:06:57.250 "copy": true, 00:06:57.250 "nvme_iov_md": false 00:06:57.250 }, 00:06:57.250 "memory_domains": [ 00:06:57.250 { 00:06:57.250 "dma_device_id": "system", 00:06:57.250 "dma_device_type": 1 00:06:57.250 }, 00:06:57.250 { 00:06:57.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.250 "dma_device_type": 2 00:06:57.250 } 00:06:57.250 ], 00:06:57.250 "driver_specific": {} 00:06:57.250 } 00:06:57.250 ]' 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 [2024-07-15 12:23:52.355802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:57.250 [2024-07-15 12:23:52.355837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.250 [2024-07-15 12:23:52.355855] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4613710 00:06:57.250 [2024-07-15 12:23:52.355866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.250 [2024-07-15 12:23:52.356599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.250 [2024-07-15 12:23:52.356620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:57.250 Passthru0 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.250 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:57.508 { 00:06:57.508 "name": "Malloc2", 00:06:57.508 "aliases": [ 00:06:57.508 "0d6abcec-1e01-4be4-8e0d-092ba4ce6f05" 00:06:57.508 ], 00:06:57.508 "product_name": "Malloc disk", 00:06:57.508 "block_size": 512, 00:06:57.508 "num_blocks": 16384, 00:06:57.508 "uuid": "0d6abcec-1e01-4be4-8e0d-092ba4ce6f05", 00:06:57.508 "assigned_rate_limits": { 00:06:57.508 "rw_ios_per_sec": 0, 00:06:57.508 "rw_mbytes_per_sec": 0, 00:06:57.508 "r_mbytes_per_sec": 0, 00:06:57.508 "w_mbytes_per_sec": 0 00:06:57.508 }, 00:06:57.508 "claimed": true, 00:06:57.508 "claim_type": "exclusive_write", 00:06:57.508 "zoned": false, 00:06:57.508 "supported_io_types": { 00:06:57.508 "read": true, 00:06:57.508 "write": true, 00:06:57.508 "unmap": true, 00:06:57.508 "flush": true, 00:06:57.508 "reset": true, 00:06:57.508 "nvme_admin": false, 00:06:57.508 "nvme_io": false, 00:06:57.508 "nvme_io_md": false, 00:06:57.508 "write_zeroes": true, 00:06:57.508 "zcopy": true, 
00:06:57.508 "get_zone_info": false, 00:06:57.508 "zone_management": false, 00:06:57.508 "zone_append": false, 00:06:57.508 "compare": false, 00:06:57.508 "compare_and_write": false, 00:06:57.508 "abort": true, 00:06:57.508 "seek_hole": false, 00:06:57.508 "seek_data": false, 00:06:57.508 "copy": true, 00:06:57.508 "nvme_iov_md": false 00:06:57.508 }, 00:06:57.508 "memory_domains": [ 00:06:57.508 { 00:06:57.508 "dma_device_id": "system", 00:06:57.508 "dma_device_type": 1 00:06:57.508 }, 00:06:57.508 { 00:06:57.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.508 "dma_device_type": 2 00:06:57.508 } 00:06:57.508 ], 00:06:57.508 "driver_specific": {} 00:06:57.508 }, 00:06:57.508 { 00:06:57.508 "name": "Passthru0", 00:06:57.508 "aliases": [ 00:06:57.508 "c09d903c-2149-5d62-b40a-f7d817b6c14e" 00:06:57.508 ], 00:06:57.508 "product_name": "passthru", 00:06:57.508 "block_size": 512, 00:06:57.508 "num_blocks": 16384, 00:06:57.508 "uuid": "c09d903c-2149-5d62-b40a-f7d817b6c14e", 00:06:57.508 "assigned_rate_limits": { 00:06:57.508 "rw_ios_per_sec": 0, 00:06:57.508 "rw_mbytes_per_sec": 0, 00:06:57.508 "r_mbytes_per_sec": 0, 00:06:57.508 "w_mbytes_per_sec": 0 00:06:57.508 }, 00:06:57.508 "claimed": false, 00:06:57.508 "zoned": false, 00:06:57.508 "supported_io_types": { 00:06:57.508 "read": true, 00:06:57.508 "write": true, 00:06:57.508 "unmap": true, 00:06:57.508 "flush": true, 00:06:57.508 "reset": true, 00:06:57.508 "nvme_admin": false, 00:06:57.508 "nvme_io": false, 00:06:57.508 "nvme_io_md": false, 00:06:57.508 "write_zeroes": true, 00:06:57.508 "zcopy": true, 00:06:57.508 "get_zone_info": false, 00:06:57.508 "zone_management": false, 00:06:57.508 "zone_append": false, 00:06:57.508 "compare": false, 00:06:57.508 "compare_and_write": false, 00:06:57.508 "abort": true, 00:06:57.508 "seek_hole": false, 00:06:57.508 "seek_data": false, 00:06:57.508 "copy": true, 00:06:57.508 "nvme_iov_md": false 00:06:57.508 }, 00:06:57.508 "memory_domains": [ 00:06:57.508 { 00:06:57.508 "dma_device_id": "system", 00:06:57.508 "dma_device_type": 1 00:06:57.508 }, 00:06:57.508 { 00:06:57.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.508 "dma_device_type": 2 00:06:57.508 } 00:06:57.508 ], 00:06:57.508 "driver_specific": { 00:06:57.508 "passthru": { 00:06:57.508 "name": "Passthru0", 00:06:57.508 "base_bdev_name": "Malloc2" 00:06:57.508 } 00:06:57.508 } 00:06:57.508 } 00:06:57.508 ]' 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.508 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:57.509 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:57.509 12:23:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:57.509 00:06:57.509 real 0m0.292s 00:06:57.509 user 0m0.172s 00:06:57.509 sys 0m0.055s 00:06:57.509 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.509 12:23:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.509 ************************************ 00:06:57.509 END TEST rpc_daemon_integrity 00:06:57.509 ************************************ 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:57.509 12:23:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:57.509 12:23:52 rpc -- rpc/rpc.sh@84 -- # killprocess 4141194 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@948 -- # '[' -z 4141194 ']' 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@952 -- # kill -0 4141194 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@953 -- # uname 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4141194 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4141194' 00:06:57.509 killing process with pid 4141194 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@967 -- # kill 4141194 00:06:57.509 12:23:52 rpc -- common/autotest_common.sh@972 -- # wait 4141194 00:06:58.074 00:06:58.074 real 0m2.651s 00:06:58.074 user 0m3.318s 00:06:58.074 sys 0m0.855s 00:06:58.074 12:23:52 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.074 12:23:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.074 ************************************ 00:06:58.074 END TEST rpc 00:06:58.074 ************************************ 00:06:58.074 12:23:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.074 12:23:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:58.074 12:23:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.074 12:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.074 12:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:58.074 ************************************ 00:06:58.074 START TEST skip_rpc 00:06:58.074 ************************************ 00:06:58.074 12:23:53 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:58.074 * Looking for test storage... 
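The rpc_daemon_integrity steps logged above amount to a short round-trip against the target's JSON-RPC socket: stack a passthru bdev on a malloc bdev, confirm both appear in bdev_get_bdevs, then tear them down and confirm the list is empty again. A minimal sketch of that flow, assuming a target is already listening on the default /var/tmp/spdk.sock and using the in-tree scripts/rpc.py (paths shortened; the 8 MiB / 512-byte malloc sizing is inferred from the num_blocks=16384, block_size=512 seen in the dump):

  # create a malloc bdev and a passthru bdev stacked on it
  ./scripts/rpc.py bdev_malloc_create -b Malloc2 8 512
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  # both bdevs should now be reported
  ./scripts/rpc.py bdev_get_bdevs | jq length        # expect 2
  # tear down in reverse order and confirm nothing is left
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length        # expect 0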
00:06:58.074 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:58.074 12:23:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:58.074 12:23:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:58.074 12:23:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:58.074 12:23:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.074 12:23:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.074 12:23:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.074 ************************************ 00:06:58.074 START TEST skip_rpc 00:06:58.074 ************************************ 00:06:58.074 12:23:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:58.074 12:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:58.074 12:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4141750 00:06:58.074 12:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.074 12:23:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:58.074 [2024-07-15 12:23:53.183332] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:06:58.074 [2024-07-15 12:23:53.183395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141750 ] 00:06:58.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.332 [2024-07-15 12:23:53.258192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.332 [2024-07-15 12:23:53.342580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.595 12:23:58 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4141750 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 4141750 ']' 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 4141750 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4141750 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4141750' 00:07:03.595 killing process with pid 4141750 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 4141750 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 4141750 00:07:03.595 00:07:03.595 real 0m5.387s 00:07:03.595 user 0m5.124s 00:07:03.595 sys 0m0.297s 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.595 12:23:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.595 ************************************ 00:07:03.595 END TEST skip_rpc 00:07:03.595 ************************************ 00:07:03.595 12:23:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:03.595 12:23:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:03.595 12:23:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.595 12:23:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.595 12:23:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.595 ************************************ 00:07:03.595 START TEST skip_rpc_with_json 00:07:03.595 ************************************ 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4142541 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4142541 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 4142541 ']' 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
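The skip_rpc case that finished just above starts the target with --no-rpc-server, so the only way it can pass is for an RPC call to fail. Stripped of the test harness, the check is roughly the following sketch (paths shortened, a fixed sleep standing in for the script's startup wait):

  # no RPC server means no /var/tmp/spdk.sock to talk to, so spdk_get_version must fail
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5
  if ./scripts/rpc.py spdk_get_version; then
      echo 'unexpected: RPC server answered' >&2
      exit 1
  fi
  kill "$tgt_pid"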
00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.595 12:23:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.596 [2024-07-15 12:23:58.663113] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:03.596 [2024-07-15 12:23:58.663175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142541 ] 00:07:03.596 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.856 [2024-07-15 12:23:58.738393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.856 [2024-07-15 12:23:58.819827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.422 [2024-07-15 12:23:59.498825] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:04.422 request: 00:07:04.422 { 00:07:04.422 "trtype": "tcp", 00:07:04.422 "method": "nvmf_get_transports", 00:07:04.422 "req_id": 1 00:07:04.422 } 00:07:04.422 Got JSON-RPC error response 00:07:04.422 response: 00:07:04.422 { 00:07:04.422 "code": -19, 00:07:04.422 "message": "No such device" 00:07:04.422 } 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.422 [2024-07-15 12:23:59.506915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.422 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.680 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.680 12:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:04.680 { 00:07:04.680 "subsystems": [ 00:07:04.680 { 00:07:04.680 "subsystem": "scheduler", 00:07:04.680 "config": [ 00:07:04.680 { 00:07:04.680 "method": "framework_set_scheduler", 00:07:04.680 "params": { 00:07:04.680 "name": "static" 00:07:04.680 } 00:07:04.680 } 00:07:04.680 ] 00:07:04.680 }, 00:07:04.680 { 00:07:04.680 "subsystem": "vmd", 00:07:04.680 "config": [] 00:07:04.680 }, 00:07:04.680 { 00:07:04.680 "subsystem": "sock", 00:07:04.680 "config": [ 00:07:04.680 { 00:07:04.680 "method": "sock_set_default_impl", 00:07:04.680 
"params": { 00:07:04.680 "impl_name": "posix" 00:07:04.680 } 00:07:04.680 }, 00:07:04.680 { 00:07:04.680 "method": "sock_impl_set_options", 00:07:04.680 "params": { 00:07:04.680 "impl_name": "ssl", 00:07:04.680 "recv_buf_size": 4096, 00:07:04.680 "send_buf_size": 4096, 00:07:04.680 "enable_recv_pipe": true, 00:07:04.680 "enable_quickack": false, 00:07:04.680 "enable_placement_id": 0, 00:07:04.680 "enable_zerocopy_send_server": true, 00:07:04.680 "enable_zerocopy_send_client": false, 00:07:04.680 "zerocopy_threshold": 0, 00:07:04.680 "tls_version": 0, 00:07:04.680 "enable_ktls": false 00:07:04.680 } 00:07:04.680 }, 00:07:04.680 { 00:07:04.680 "method": "sock_impl_set_options", 00:07:04.680 "params": { 00:07:04.680 "impl_name": "posix", 00:07:04.680 "recv_buf_size": 2097152, 00:07:04.680 "send_buf_size": 2097152, 00:07:04.680 "enable_recv_pipe": true, 00:07:04.680 "enable_quickack": false, 00:07:04.680 "enable_placement_id": 0, 00:07:04.680 "enable_zerocopy_send_server": true, 00:07:04.680 "enable_zerocopy_send_client": false, 00:07:04.680 "zerocopy_threshold": 0, 00:07:04.680 "tls_version": 0, 00:07:04.680 "enable_ktls": false 00:07:04.680 } 00:07:04.680 } 00:07:04.680 ] 00:07:04.680 }, 00:07:04.680 { 00:07:04.680 "subsystem": "iobuf", 00:07:04.680 "config": [ 00:07:04.680 { 00:07:04.680 "method": "iobuf_set_options", 00:07:04.680 "params": { 00:07:04.680 "small_pool_count": 8192, 00:07:04.680 "large_pool_count": 1024, 00:07:04.680 "small_bufsize": 8192, 00:07:04.680 "large_bufsize": 135168 00:07:04.680 } 00:07:04.681 } 00:07:04.681 ] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "keyring", 00:07:04.681 "config": [] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "vfio_user_target", 00:07:04.681 "config": null 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "accel", 00:07:04.681 "config": [ 00:07:04.681 { 00:07:04.681 "method": "accel_set_options", 00:07:04.681 "params": { 00:07:04.681 "small_cache_size": 128, 00:07:04.681 "large_cache_size": 16, 00:07:04.681 "task_count": 2048, 00:07:04.681 "sequence_count": 2048, 00:07:04.681 "buf_count": 2048 00:07:04.681 } 00:07:04.681 } 00:07:04.681 ] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "bdev", 00:07:04.681 "config": [ 00:07:04.681 { 00:07:04.681 "method": "bdev_set_options", 00:07:04.681 "params": { 00:07:04.681 "bdev_io_pool_size": 65535, 00:07:04.681 "bdev_io_cache_size": 256, 00:07:04.681 "bdev_auto_examine": true, 00:07:04.681 "iobuf_small_cache_size": 128, 00:07:04.681 "iobuf_large_cache_size": 16 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "bdev_raid_set_options", 00:07:04.681 "params": { 00:07:04.681 "process_window_size_kb": 1024 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "bdev_nvme_set_options", 00:07:04.681 "params": { 00:07:04.681 "action_on_timeout": "none", 00:07:04.681 "timeout_us": 0, 00:07:04.681 "timeout_admin_us": 0, 00:07:04.681 "keep_alive_timeout_ms": 10000, 00:07:04.681 "arbitration_burst": 0, 00:07:04.681 "low_priority_weight": 0, 00:07:04.681 "medium_priority_weight": 0, 00:07:04.681 "high_priority_weight": 0, 00:07:04.681 "nvme_adminq_poll_period_us": 10000, 00:07:04.681 "nvme_ioq_poll_period_us": 0, 00:07:04.681 "io_queue_requests": 0, 00:07:04.681 "delay_cmd_submit": true, 00:07:04.681 "transport_retry_count": 4, 00:07:04.681 "bdev_retry_count": 3, 00:07:04.681 "transport_ack_timeout": 0, 00:07:04.681 "ctrlr_loss_timeout_sec": 0, 00:07:04.681 "reconnect_delay_sec": 0, 00:07:04.681 "fast_io_fail_timeout_sec": 0, 00:07:04.681 
"disable_auto_failback": false, 00:07:04.681 "generate_uuids": false, 00:07:04.681 "transport_tos": 0, 00:07:04.681 "nvme_error_stat": false, 00:07:04.681 "rdma_srq_size": 0, 00:07:04.681 "io_path_stat": false, 00:07:04.681 "allow_accel_sequence": false, 00:07:04.681 "rdma_max_cq_size": 0, 00:07:04.681 "rdma_cm_event_timeout_ms": 0, 00:07:04.681 "dhchap_digests": [ 00:07:04.681 "sha256", 00:07:04.681 "sha384", 00:07:04.681 "sha512" 00:07:04.681 ], 00:07:04.681 "dhchap_dhgroups": [ 00:07:04.681 "null", 00:07:04.681 "ffdhe2048", 00:07:04.681 "ffdhe3072", 00:07:04.681 "ffdhe4096", 00:07:04.681 "ffdhe6144", 00:07:04.681 "ffdhe8192" 00:07:04.681 ] 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "bdev_nvme_set_hotplug", 00:07:04.681 "params": { 00:07:04.681 "period_us": 100000, 00:07:04.681 "enable": false 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "bdev_iscsi_set_options", 00:07:04.681 "params": { 00:07:04.681 "timeout_sec": 30 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "bdev_wait_for_examine" 00:07:04.681 } 00:07:04.681 ] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "nvmf", 00:07:04.681 "config": [ 00:07:04.681 { 00:07:04.681 "method": "nvmf_set_config", 00:07:04.681 "params": { 00:07:04.681 "discovery_filter": "match_any", 00:07:04.681 "admin_cmd_passthru": { 00:07:04.681 "identify_ctrlr": false 00:07:04.681 } 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "nvmf_set_max_subsystems", 00:07:04.681 "params": { 00:07:04.681 "max_subsystems": 1024 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "nvmf_set_crdt", 00:07:04.681 "params": { 00:07:04.681 "crdt1": 0, 00:07:04.681 "crdt2": 0, 00:07:04.681 "crdt3": 0 00:07:04.681 } 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "method": "nvmf_create_transport", 00:07:04.681 "params": { 00:07:04.681 "trtype": "TCP", 00:07:04.681 "max_queue_depth": 128, 00:07:04.681 "max_io_qpairs_per_ctrlr": 127, 00:07:04.681 "in_capsule_data_size": 4096, 00:07:04.681 "max_io_size": 131072, 00:07:04.681 "io_unit_size": 131072, 00:07:04.681 "max_aq_depth": 128, 00:07:04.681 "num_shared_buffers": 511, 00:07:04.681 "buf_cache_size": 4294967295, 00:07:04.681 "dif_insert_or_strip": false, 00:07:04.681 "zcopy": false, 00:07:04.681 "c2h_success": true, 00:07:04.681 "sock_priority": 0, 00:07:04.681 "abort_timeout_sec": 1, 00:07:04.681 "ack_timeout": 0, 00:07:04.681 "data_wr_pool_size": 0 00:07:04.681 } 00:07:04.681 } 00:07:04.681 ] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "nbd", 00:07:04.681 "config": [] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "ublk", 00:07:04.681 "config": [] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "vhost_blk", 00:07:04.681 "config": [] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "scsi", 00:07:04.681 "config": null 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "iscsi", 00:07:04.681 "config": [ 00:07:04.681 { 00:07:04.681 "method": "iscsi_set_options", 00:07:04.681 "params": { 00:07:04.681 "node_base": "iqn.2016-06.io.spdk", 00:07:04.681 "max_sessions": 128, 00:07:04.681 "max_connections_per_session": 2, 00:07:04.681 "max_queue_depth": 64, 00:07:04.681 "default_time2wait": 2, 00:07:04.681 "default_time2retain": 20, 00:07:04.681 "first_burst_length": 8192, 00:07:04.681 "immediate_data": true, 00:07:04.681 "allow_duplicated_isid": false, 00:07:04.681 "error_recovery_level": 0, 00:07:04.681 "nop_timeout": 60, 00:07:04.681 "nop_in_interval": 30, 00:07:04.681 
"disable_chap": false, 00:07:04.681 "require_chap": false, 00:07:04.681 "mutual_chap": false, 00:07:04.681 "chap_group": 0, 00:07:04.681 "max_large_datain_per_connection": 64, 00:07:04.681 "max_r2t_per_connection": 4, 00:07:04.681 "pdu_pool_size": 36864, 00:07:04.681 "immediate_data_pool_size": 16384, 00:07:04.681 "data_out_pool_size": 2048 00:07:04.681 } 00:07:04.681 } 00:07:04.681 ] 00:07:04.681 }, 00:07:04.681 { 00:07:04.681 "subsystem": "vhost_scsi", 00:07:04.681 "config": [] 00:07:04.681 } 00:07:04.681 ] 00:07:04.681 } 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4142541 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4142541 ']' 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4142541 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4142541 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4142541' 00:07:04.681 killing process with pid 4142541 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4142541 00:07:04.681 12:23:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4142541 00:07:04.940 12:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4142781 00:07:04.940 12:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:04.940 12:24:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4142781 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4142781 ']' 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4142781 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4142781 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4142781' 00:07:10.208 killing process with pid 4142781 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4142781 00:07:10.208 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4142781 00:07:10.467 
12:24:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:10.467 00:07:10.467 real 0m6.817s 00:07:10.467 user 0m6.567s 00:07:10.467 sys 0m0.671s 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.467 ************************************ 00:07:10.467 END TEST skip_rpc_with_json 00:07:10.467 ************************************ 00:07:10.467 12:24:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:10.467 12:24:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:10.467 12:24:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.467 12:24:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.467 12:24:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.467 ************************************ 00:07:10.467 START TEST skip_rpc_with_delay 00:07:10.467 ************************************ 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.467 [2024-07-15 12:24:05.547163] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
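The skip_rpc_with_json run that just wrapped up is essentially a save/replay cycle: create the TCP transport, dump the live configuration with save_config, boot a fresh target from that JSON, and grep its log for the transport-init notice. A rough equivalent, with paths and the process handling simplified (the grep string matches the check seen above):

  # build some state, persist it, and restart from the saved JSON
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  jq -r '.subsystems[].subsystem' config.json       # scheduler, sock, ..., nvmf, iscsi
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  tgt_pid=$!
  sleep 5
  kill "$tgt_pid"; wait "$tgt_pid"
  grep -q 'TCP Transport Init' log.txt              # proof the saved config was replayed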
00:07:10.467 [2024-07-15 12:24:05.547251] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.467 00:07:10.467 real 0m0.028s 00:07:10.467 user 0m0.015s 00:07:10.467 sys 0m0.013s 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.467 12:24:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:10.467 ************************************ 00:07:10.467 END TEST skip_rpc_with_delay 00:07:10.467 ************************************ 00:07:10.467 12:24:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:10.467 12:24:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:10.726 12:24:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:10.726 12:24:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:10.726 12:24:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.726 12:24:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.726 12:24:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.726 ************************************ 00:07:10.726 START TEST exit_on_failed_rpc_init 00:07:10.726 ************************************ 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4143968 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4143968 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 4143968 ']' 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:10.726 12:24:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.726 [2024-07-15 12:24:05.663900] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
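skip_rpc_with_delay, which also just completed, only verifies that two flags are mutually exclusive: --wait-for-rpc holds back subsystem initialization until an explicit RPC tells the app to continue, so pairing it with --no-rpc-server has to be rejected at startup. In shell terms (path shortened):

  # expected to exit non-zero with:
  #   "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'unexpected: target started' >&2
      exit 1
  fi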
00:07:10.726 [2024-07-15 12:24:05.663983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143968 ] 00:07:10.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.726 [2024-07-15 12:24:05.738727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.726 [2024-07-15 12:24:05.829878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.662 [2024-07-15 12:24:06.505794] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:11.662 [2024-07-15 12:24:06.505850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144111 ] 00:07:11.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.662 [2024-07-15 12:24:06.578940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.662 [2024-07-15 12:24:06.658918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.662 [2024-07-15 12:24:06.659007] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:11.662 [2024-07-15 12:24:06.659021] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:11.662 [2024-07-15 12:24:06.659029] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4143968 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 4143968 ']' 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 4143968 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4143968 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4143968' 00:07:11.662 killing process with pid 4143968 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 4143968 00:07:11.662 12:24:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 4143968 00:07:12.261 00:07:12.261 real 0m1.484s 00:07:12.261 user 0m1.644s 00:07:12.261 sys 0m0.455s 00:07:12.261 12:24:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.261 12:24:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:12.261 ************************************ 00:07:12.261 END TEST exit_on_failed_rpc_init 00:07:12.261 ************************************ 00:07:12.261 12:24:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:12.261 12:24:07 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:12.261 00:07:12.261 real 0m14.140s 00:07:12.261 user 0m13.502s 00:07:12.261 sys 0m1.743s 00:07:12.261 12:24:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.261 12:24:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.261 ************************************ 00:07:12.261 END TEST skip_rpc 00:07:12.261 ************************************ 00:07:12.261 12:24:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.261 12:24:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:12.261 12:24:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.261 12:24:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.261 12:24:07 -- common/autotest_common.sh@10 -- # set +x 00:07:12.261 ************************************ 00:07:12.261 START TEST rpc_client 00:07:12.261 ************************************ 00:07:12.261 12:24:07 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:12.261 * Looking for test storage... 00:07:12.261 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:07:12.261 12:24:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:12.261 OK 00:07:12.261 12:24:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:12.261 00:07:12.261 real 0m0.129s 00:07:12.261 user 0m0.058s 00:07:12.261 sys 0m0.080s 00:07:12.261 12:24:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.261 12:24:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:12.261 ************************************ 00:07:12.261 END TEST rpc_client 00:07:12.261 ************************************ 00:07:12.520 12:24:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.520 12:24:07 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:12.520 12:24:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.520 12:24:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.520 12:24:07 -- common/autotest_common.sh@10 -- # set +x 00:07:12.520 ************************************ 00:07:12.520 START TEST json_config 00:07:12.520 ************************************ 00:07:12.520 12:24:07 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
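Circling back to exit_on_failed_rpc_init a few entries up: that test is two targets contending for one RPC socket. Both default to /var/tmp/spdk.sock, so the second instance fails RPC initialization ("... in use. Specify another.") and must exit non-zero. A hedged sketch (the real script uses waitforlisten rather than a fixed sleep):

  ./build/bin/spdk_tgt -m 0x1 &           # first target owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 5
  if ./build/bin/spdk_tgt -m 0x2; then    # second target: socket in use, init fails
      echo 'unexpected: second target started' >&2
      exit 1
  fi
  kill "$first_pid"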
00:07:12.520 12:24:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:12.520 12:24:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.520 12:24:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.520 12:24:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.520 12:24:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.520 12:24:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.520 12:24:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.520 12:24:07 json_config -- paths/export.sh@5 -- # export PATH 00:07:12.520 12:24:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@47 -- # : 0 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:07:12.520 12:24:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.520 12:24:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:12.520 12:24:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:12.520 WARNING: No tests are enabled so not running JSON configuration tests 00:07:12.521 12:24:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:12.521 00:07:12.521 real 0m0.106s 00:07:12.521 user 0m0.057s 00:07:12.521 sys 0m0.050s 00:07:12.521 12:24:07 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.521 12:24:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.521 ************************************ 00:07:12.521 END TEST json_config 00:07:12.521 ************************************ 00:07:12.521 12:24:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.521 12:24:07 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:12.521 12:24:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.521 12:24:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.521 12:24:07 -- common/autotest_common.sh@10 -- # set +x 00:07:12.521 ************************************ 00:07:12.521 START TEST json_config_extra_key 00:07:12.521 ************************************ 00:07:12.521 12:24:07 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:12.779 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.779 12:24:07 json_config_extra_key -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:07:12.779 12:24:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:12.780 12:24:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.780 12:24:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.780 12:24:07 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.780 12:24:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.780 12:24:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.780 12:24:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.780 12:24:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:12.780 12:24:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.780 
12:24:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.780 12:24:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:12.780 INFO: launching applications... 00:07:12.780 12:24:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4144416 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:12.780 Waiting for target to run... 
00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4144416 /var/tmp/spdk_tgt.sock 00:07:12.780 12:24:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:12.780 12:24:07 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 4144416 ']' 00:07:12.780 12:24:07 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:12.780 12:24:07 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.780 12:24:07 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:12.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:12.780 12:24:07 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.780 12:24:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:12.780 [2024-07-15 12:24:07.771801] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:12.780 [2024-07-15 12:24:07.771868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144416 ] 00:07:12.780 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.347 [2024-07-15 12:24:08.258549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.347 [2024-07-15 12:24:08.350913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.605 12:24:08 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.605 12:24:08 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:13.605 00:07:13.605 12:24:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:13.605 INFO: shutting down applications... 
00:07:13.605 12:24:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4144416 ]] 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4144416 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4144416 00:07:13.605 12:24:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4144416 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:14.173 12:24:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:14.173 SPDK target shutdown done 00:07:14.173 12:24:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:14.173 Success 00:07:14.173 00:07:14.173 real 0m1.477s 00:07:14.173 user 0m1.053s 00:07:14.173 sys 0m0.611s 00:07:14.173 12:24:09 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.173 12:24:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:14.173 ************************************ 00:07:14.173 END TEST json_config_extra_key 00:07:14.173 ************************************ 00:07:14.173 12:24:09 -- common/autotest_common.sh@1142 -- # return 0 00:07:14.173 12:24:09 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:14.173 12:24:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.173 12:24:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.173 12:24:09 -- common/autotest_common.sh@10 -- # set +x 00:07:14.173 ************************************ 00:07:14.173 START TEST alias_rpc 00:07:14.173 ************************************ 00:07:14.173 12:24:09 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:14.173 * Looking for test storage... 
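The shutdown traced above (json_config/common.sh) is a SIGINT followed by a bounded poll: kill -0 is retried for up to 30 half-second intervals, and the pid slot is cleared once the process is gone. Condensed to the same shape and variable names:

kill -SIGINT "${app_pid[$app]}"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
        app_pid["$app"]=                       # forget the pid once the target has exited
        break
    fi
    sleep 0.5
done
echo 'SPDK target shutdown done'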
00:07:14.432 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:07:14.432 12:24:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:14.432 12:24:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4144765 00:07:14.432 12:24:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:14.432 12:24:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4144765 00:07:14.432 12:24:09 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 4144765 ']' 00:07:14.432 12:24:09 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.432 12:24:09 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.432 12:24:09 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.432 12:24:09 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.432 12:24:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.432 [2024-07-15 12:24:09.333495] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:14.432 [2024-07-15 12:24:09.333580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144765 ] 00:07:14.432 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.433 [2024-07-15 12:24:09.410037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.433 [2024-07-15 12:24:09.499048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:15.386 12:24:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:15.386 12:24:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4144765 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 4144765 ']' 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 4144765 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4144765 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4144765' 00:07:15.386 killing process with pid 4144765 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@967 -- # kill 4144765 00:07:15.386 12:24:10 alias_rpc -- common/autotest_common.sh@972 -- # wait 4144765 00:07:15.644 00:07:15.644 real 0m1.570s 00:07:15.644 user 0m1.648s 00:07:15.644 sys 0m0.491s 00:07:15.644 12:24:10 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.644 12:24:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 
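alias_rpc exercises the deprecated RPC names: the traced call is rpc.py load_config -i, where -i appears to be --include-aliases, letting a config that still uses old method names load instead of being rejected; with no filename the config seems to be read from stdin. A hedged example of that call shape, with an illustrative config (construct_malloc_bdev being the historical alias of bdev_malloc_create):

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$rootdir/scripts/rpc.py" load_config -i <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "construct_malloc_bdev",
          "params": { "num_blocks": 256, "block_size": 512 } }
      ]
    }
  ]
}
JSON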
00:07:15.644 ************************************ 00:07:15.644 END TEST alias_rpc 00:07:15.644 ************************************ 00:07:15.902 12:24:10 -- common/autotest_common.sh@1142 -- # return 0 00:07:15.902 12:24:10 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:15.902 12:24:10 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:15.902 12:24:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.902 12:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.902 12:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:15.902 ************************************ 00:07:15.902 START TEST spdkcli_tcp 00:07:15.902 ************************************ 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:15.902 * Looking for test storage... 00:07:15.902 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4145033 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4145033 00:07:15.902 12:24:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 4145033 ']' 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.902 12:24:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.902 [2024-07-15 12:24:10.965142] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:15.902 [2024-07-15 12:24:10.965231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145033 ] 00:07:15.902 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.160 [2024-07-15 12:24:11.042258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.160 [2024-07-15 12:24:11.124632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.160 [2024-07-15 12:24:11.124634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.725 12:24:11 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.725 12:24:11 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:07:16.725 12:24:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:16.725 12:24:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4145065 00:07:16.725 12:24:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:16.984 [ 00:07:16.984 "spdk_get_version", 00:07:16.984 "rpc_get_methods", 00:07:16.984 "trace_get_info", 00:07:16.984 "trace_get_tpoint_group_mask", 00:07:16.984 "trace_disable_tpoint_group", 00:07:16.984 "trace_enable_tpoint_group", 00:07:16.984 "trace_clear_tpoint_mask", 00:07:16.984 "trace_set_tpoint_mask", 00:07:16.984 "vfu_tgt_set_base_path", 00:07:16.984 "framework_get_pci_devices", 00:07:16.984 "framework_get_config", 00:07:16.984 "framework_get_subsystems", 00:07:16.984 "keyring_get_keys", 00:07:16.984 "iobuf_get_stats", 00:07:16.984 "iobuf_set_options", 00:07:16.984 "sock_get_default_impl", 00:07:16.984 "sock_set_default_impl", 00:07:16.984 "sock_impl_set_options", 00:07:16.984 "sock_impl_get_options", 00:07:16.984 "vmd_rescan", 00:07:16.984 "vmd_remove_device", 00:07:16.984 "vmd_enable", 00:07:16.984 "accel_get_stats", 00:07:16.984 "accel_set_options", 00:07:16.984 "accel_set_driver", 00:07:16.984 "accel_crypto_key_destroy", 00:07:16.984 "accel_crypto_keys_get", 00:07:16.984 "accel_crypto_key_create", 00:07:16.984 "accel_assign_opc", 00:07:16.984 "accel_get_module_info", 00:07:16.984 "accel_get_opc_assignments", 00:07:16.984 "notify_get_notifications", 00:07:16.984 "notify_get_types", 00:07:16.984 "bdev_get_histogram", 00:07:16.984 "bdev_enable_histogram", 00:07:16.984 "bdev_set_qos_limit", 00:07:16.984 "bdev_set_qd_sampling_period", 00:07:16.984 "bdev_get_bdevs", 00:07:16.984 "bdev_reset_iostat", 00:07:16.984 "bdev_get_iostat", 00:07:16.984 "bdev_examine", 00:07:16.984 "bdev_wait_for_examine", 00:07:16.984 "bdev_set_options", 00:07:16.984 "scsi_get_devices", 00:07:16.984 "thread_set_cpumask", 00:07:16.984 "framework_get_governor", 00:07:16.984 "framework_get_scheduler", 00:07:16.984 "framework_set_scheduler", 00:07:16.984 "framework_get_reactors", 00:07:16.984 "thread_get_io_channels", 00:07:16.984 "thread_get_pollers", 00:07:16.984 "thread_get_stats", 00:07:16.984 "framework_monitor_context_switch", 00:07:16.984 "spdk_kill_instance", 00:07:16.984 "log_enable_timestamps", 00:07:16.984 "log_get_flags", 00:07:16.984 "log_clear_flag", 00:07:16.984 "log_set_flag", 00:07:16.984 "log_get_level", 00:07:16.984 "log_set_level", 00:07:16.984 "log_get_print_level", 00:07:16.984 "log_set_print_level", 00:07:16.984 "framework_enable_cpumask_locks", 00:07:16.984 "framework_disable_cpumask_locks", 
00:07:16.984 "framework_wait_init", 00:07:16.984 "framework_start_init", 00:07:16.984 "virtio_blk_create_transport", 00:07:16.984 "virtio_blk_get_transports", 00:07:16.984 "vhost_controller_set_coalescing", 00:07:16.984 "vhost_get_controllers", 00:07:16.984 "vhost_delete_controller", 00:07:16.984 "vhost_create_blk_controller", 00:07:16.984 "vhost_scsi_controller_remove_target", 00:07:16.984 "vhost_scsi_controller_add_target", 00:07:16.984 "vhost_start_scsi_controller", 00:07:16.984 "vhost_create_scsi_controller", 00:07:16.984 "ublk_recover_disk", 00:07:16.984 "ublk_get_disks", 00:07:16.984 "ublk_stop_disk", 00:07:16.984 "ublk_start_disk", 00:07:16.984 "ublk_destroy_target", 00:07:16.984 "ublk_create_target", 00:07:16.984 "nbd_get_disks", 00:07:16.984 "nbd_stop_disk", 00:07:16.984 "nbd_start_disk", 00:07:16.984 "env_dpdk_get_mem_stats", 00:07:16.984 "nvmf_update_mdns_prr", 00:07:16.984 "nvmf_stop_mdns_prr", 00:07:16.984 "nvmf_publish_mdns_prr", 00:07:16.984 "nvmf_subsystem_get_listeners", 00:07:16.984 "nvmf_subsystem_get_qpairs", 00:07:16.984 "nvmf_subsystem_get_controllers", 00:07:16.984 "nvmf_get_stats", 00:07:16.984 "nvmf_get_transports", 00:07:16.984 "nvmf_create_transport", 00:07:16.984 "nvmf_get_targets", 00:07:16.984 "nvmf_delete_target", 00:07:16.984 "nvmf_create_target", 00:07:16.984 "nvmf_subsystem_allow_any_host", 00:07:16.984 "nvmf_subsystem_remove_host", 00:07:16.984 "nvmf_subsystem_add_host", 00:07:16.984 "nvmf_ns_remove_host", 00:07:16.984 "nvmf_ns_add_host", 00:07:16.984 "nvmf_subsystem_remove_ns", 00:07:16.984 "nvmf_subsystem_add_ns", 00:07:16.984 "nvmf_subsystem_listener_set_ana_state", 00:07:16.984 "nvmf_discovery_get_referrals", 00:07:16.984 "nvmf_discovery_remove_referral", 00:07:16.984 "nvmf_discovery_add_referral", 00:07:16.984 "nvmf_subsystem_remove_listener", 00:07:16.984 "nvmf_subsystem_add_listener", 00:07:16.984 "nvmf_delete_subsystem", 00:07:16.984 "nvmf_create_subsystem", 00:07:16.984 "nvmf_get_subsystems", 00:07:16.984 "nvmf_set_crdt", 00:07:16.984 "nvmf_set_config", 00:07:16.984 "nvmf_set_max_subsystems", 00:07:16.984 "iscsi_get_histogram", 00:07:16.984 "iscsi_enable_histogram", 00:07:16.984 "iscsi_set_options", 00:07:16.984 "iscsi_get_auth_groups", 00:07:16.984 "iscsi_auth_group_remove_secret", 00:07:16.984 "iscsi_auth_group_add_secret", 00:07:16.984 "iscsi_delete_auth_group", 00:07:16.984 "iscsi_create_auth_group", 00:07:16.984 "iscsi_set_discovery_auth", 00:07:16.984 "iscsi_get_options", 00:07:16.984 "iscsi_target_node_request_logout", 00:07:16.984 "iscsi_target_node_set_redirect", 00:07:16.984 "iscsi_target_node_set_auth", 00:07:16.984 "iscsi_target_node_add_lun", 00:07:16.984 "iscsi_get_stats", 00:07:16.984 "iscsi_get_connections", 00:07:16.984 "iscsi_portal_group_set_auth", 00:07:16.984 "iscsi_start_portal_group", 00:07:16.984 "iscsi_delete_portal_group", 00:07:16.984 "iscsi_create_portal_group", 00:07:16.984 "iscsi_get_portal_groups", 00:07:16.984 "iscsi_delete_target_node", 00:07:16.984 "iscsi_target_node_remove_pg_ig_maps", 00:07:16.984 "iscsi_target_node_add_pg_ig_maps", 00:07:16.984 "iscsi_create_target_node", 00:07:16.984 "iscsi_get_target_nodes", 00:07:16.984 "iscsi_delete_initiator_group", 00:07:16.984 "iscsi_initiator_group_remove_initiators", 00:07:16.984 "iscsi_initiator_group_add_initiators", 00:07:16.984 "iscsi_create_initiator_group", 00:07:16.984 "iscsi_get_initiator_groups", 00:07:16.984 "keyring_linux_set_options", 00:07:16.984 "keyring_file_remove_key", 00:07:16.984 "keyring_file_add_key", 00:07:16.984 
"vfu_virtio_create_scsi_endpoint", 00:07:16.984 "vfu_virtio_scsi_remove_target", 00:07:16.984 "vfu_virtio_scsi_add_target", 00:07:16.984 "vfu_virtio_create_blk_endpoint", 00:07:16.984 "vfu_virtio_delete_endpoint", 00:07:16.984 "iaa_scan_accel_module", 00:07:16.984 "dsa_scan_accel_module", 00:07:16.984 "ioat_scan_accel_module", 00:07:16.984 "accel_error_inject_error", 00:07:16.984 "bdev_iscsi_delete", 00:07:16.984 "bdev_iscsi_create", 00:07:16.984 "bdev_iscsi_set_options", 00:07:16.984 "bdev_virtio_attach_controller", 00:07:16.984 "bdev_virtio_scsi_get_devices", 00:07:16.984 "bdev_virtio_detach_controller", 00:07:16.984 "bdev_virtio_blk_set_hotplug", 00:07:16.984 "bdev_ftl_set_property", 00:07:16.984 "bdev_ftl_get_properties", 00:07:16.984 "bdev_ftl_get_stats", 00:07:16.984 "bdev_ftl_unmap", 00:07:16.984 "bdev_ftl_unload", 00:07:16.984 "bdev_ftl_delete", 00:07:16.984 "bdev_ftl_load", 00:07:16.984 "bdev_ftl_create", 00:07:16.984 "bdev_aio_delete", 00:07:16.984 "bdev_aio_rescan", 00:07:16.984 "bdev_aio_create", 00:07:16.984 "blobfs_create", 00:07:16.984 "blobfs_detect", 00:07:16.984 "blobfs_set_cache_size", 00:07:16.984 "bdev_zone_block_delete", 00:07:16.984 "bdev_zone_block_create", 00:07:16.984 "bdev_delay_delete", 00:07:16.984 "bdev_delay_create", 00:07:16.984 "bdev_delay_update_latency", 00:07:16.984 "bdev_split_delete", 00:07:16.984 "bdev_split_create", 00:07:16.984 "bdev_error_inject_error", 00:07:16.984 "bdev_error_delete", 00:07:16.984 "bdev_error_create", 00:07:16.984 "bdev_raid_set_options", 00:07:16.984 "bdev_raid_remove_base_bdev", 00:07:16.984 "bdev_raid_add_base_bdev", 00:07:16.984 "bdev_raid_delete", 00:07:16.985 "bdev_raid_create", 00:07:16.985 "bdev_raid_get_bdevs", 00:07:16.985 "bdev_lvol_set_parent_bdev", 00:07:16.985 "bdev_lvol_set_parent", 00:07:16.985 "bdev_lvol_check_shallow_copy", 00:07:16.985 "bdev_lvol_start_shallow_copy", 00:07:16.985 "bdev_lvol_grow_lvstore", 00:07:16.985 "bdev_lvol_get_lvols", 00:07:16.985 "bdev_lvol_get_lvstores", 00:07:16.985 "bdev_lvol_delete", 00:07:16.985 "bdev_lvol_set_read_only", 00:07:16.985 "bdev_lvol_resize", 00:07:16.985 "bdev_lvol_decouple_parent", 00:07:16.985 "bdev_lvol_inflate", 00:07:16.985 "bdev_lvol_rename", 00:07:16.985 "bdev_lvol_clone_bdev", 00:07:16.985 "bdev_lvol_clone", 00:07:16.985 "bdev_lvol_snapshot", 00:07:16.985 "bdev_lvol_create", 00:07:16.985 "bdev_lvol_delete_lvstore", 00:07:16.985 "bdev_lvol_rename_lvstore", 00:07:16.985 "bdev_lvol_create_lvstore", 00:07:16.985 "bdev_passthru_delete", 00:07:16.985 "bdev_passthru_create", 00:07:16.985 "bdev_nvme_cuse_unregister", 00:07:16.985 "bdev_nvme_cuse_register", 00:07:16.985 "bdev_opal_new_user", 00:07:16.985 "bdev_opal_set_lock_state", 00:07:16.985 "bdev_opal_delete", 00:07:16.985 "bdev_opal_get_info", 00:07:16.985 "bdev_opal_create", 00:07:16.985 "bdev_nvme_opal_revert", 00:07:16.985 "bdev_nvme_opal_init", 00:07:16.985 "bdev_nvme_send_cmd", 00:07:16.985 "bdev_nvme_get_path_iostat", 00:07:16.985 "bdev_nvme_get_mdns_discovery_info", 00:07:16.985 "bdev_nvme_stop_mdns_discovery", 00:07:16.985 "bdev_nvme_start_mdns_discovery", 00:07:16.985 "bdev_nvme_set_multipath_policy", 00:07:16.985 "bdev_nvme_set_preferred_path", 00:07:16.985 "bdev_nvme_get_io_paths", 00:07:16.985 "bdev_nvme_remove_error_injection", 00:07:16.985 "bdev_nvme_add_error_injection", 00:07:16.985 "bdev_nvme_get_discovery_info", 00:07:16.985 "bdev_nvme_stop_discovery", 00:07:16.985 "bdev_nvme_start_discovery", 00:07:16.985 "bdev_nvme_get_controller_health_info", 00:07:16.985 "bdev_nvme_disable_controller", 
00:07:16.985 "bdev_nvme_enable_controller", 00:07:16.985 "bdev_nvme_reset_controller", 00:07:16.985 "bdev_nvme_get_transport_statistics", 00:07:16.985 "bdev_nvme_apply_firmware", 00:07:16.985 "bdev_nvme_detach_controller", 00:07:16.985 "bdev_nvme_get_controllers", 00:07:16.985 "bdev_nvme_attach_controller", 00:07:16.985 "bdev_nvme_set_hotplug", 00:07:16.985 "bdev_nvme_set_options", 00:07:16.985 "bdev_null_resize", 00:07:16.985 "bdev_null_delete", 00:07:16.985 "bdev_null_create", 00:07:16.985 "bdev_malloc_delete", 00:07:16.985 "bdev_malloc_create" 00:07:16.985 ] 00:07:16.985 12:24:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:16.985 12:24:11 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.985 12:24:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.985 12:24:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:16.985 12:24:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4145033 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 4145033 ']' 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 4145033 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4145033 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4145033' 00:07:16.985 killing process with pid 4145033 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 4145033 00:07:16.985 12:24:12 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 4145033 00:07:17.551 00:07:17.551 real 0m1.558s 00:07:17.551 user 0m2.843s 00:07:17.551 sys 0m0.509s 00:07:17.551 12:24:12 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.551 12:24:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 ************************************ 00:07:17.551 END TEST spdkcli_tcp 00:07:17.551 ************************************ 00:07:17.551 12:24:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:17.551 12:24:12 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:17.551 12:24:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.551 12:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.551 12:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 ************************************ 00:07:17.551 START TEST dpdk_mem_utility 00:07:17.551 ************************************ 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:17.551 * Looking for test storage... 
00:07:17.551 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:07:17.551 12:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:17.551 12:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4145289 00:07:17.551 12:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.551 12:24:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4145289 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 4145289 ']' 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.551 12:24:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:17.551 [2024-07-15 12:24:12.590413] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:17.551 [2024-07-15 12:24:12.590502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145289 ] 00:07:17.551 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.551 [2024-07-15 12:24:12.666346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.809 [2024-07-15 12:24:12.749080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.376 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.376 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:18.376 12:24:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:18.376 12:24:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:18.376 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.376 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.376 { 00:07:18.376 "filename": "/tmp/spdk_mem_dump.txt" 00:07:18.376 } 00:07:18.376 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.376 12:24:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:18.376 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:18.377 1 heaps totaling size 814.000000 MiB 00:07:18.377 size: 814.000000 MiB heap id: 0 00:07:18.377 end heaps---------- 00:07:18.377 8 mempools totaling size 598.116089 MiB 00:07:18.377 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:18.377 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:18.377 size: 84.521057 MiB name: bdev_io_4145289 00:07:18.377 size: 51.011292 MiB name: evtpool_4145289 
00:07:18.377 size: 50.003479 MiB name: msgpool_4145289 00:07:18.377 size: 21.763794 MiB name: PDU_Pool 00:07:18.377 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:18.377 size: 0.026123 MiB name: Session_Pool 00:07:18.377 end mempools------- 00:07:18.377 6 memzones totaling size 4.142822 MiB 00:07:18.377 size: 1.000366 MiB name: RG_ring_0_4145289 00:07:18.377 size: 1.000366 MiB name: RG_ring_1_4145289 00:07:18.377 size: 1.000366 MiB name: RG_ring_4_4145289 00:07:18.377 size: 1.000366 MiB name: RG_ring_5_4145289 00:07:18.377 size: 0.125366 MiB name: RG_ring_2_4145289 00:07:18.377 size: 0.015991 MiB name: RG_ring_3_4145289 00:07:18.377 end memzones------- 00:07:18.377 12:24:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:18.635 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:18.635 list of free elements. size: 12.519348 MiB 00:07:18.635 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:18.635 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:18.635 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:18.635 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:18.635 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:18.635 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:18.635 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:18.635 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:18.635 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:18.635 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:18.635 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:18.635 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:18.635 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:18.635 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:18.635 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:18.635 list of standard malloc elements. 
size: 199.218079 MiB 00:07:18.635 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:18.635 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:18.635 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:18.635 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:18.635 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:18.635 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:18.635 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:18.635 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:18.635 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:18.635 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:18.635 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:18.635 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:18.635 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:18.635 list of memzone associated elements. 
size: 602.262573 MiB 00:07:18.635 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:18.635 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:18.635 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:18.635 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:18.635 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:18.635 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4145289_0 00:07:18.635 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:18.635 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4145289_0 00:07:18.635 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:18.635 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4145289_0 00:07:18.635 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:18.635 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:18.635 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:18.635 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:18.635 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:18.635 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4145289 00:07:18.635 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:18.635 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4145289 00:07:18.635 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:18.635 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4145289 00:07:18.635 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:18.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:18.635 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:18.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:18.635 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:18.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:18.635 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:18.635 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:18.635 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:18.635 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4145289 00:07:18.635 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:18.635 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4145289 00:07:18.635 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:18.635 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4145289 00:07:18.635 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:18.635 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4145289 00:07:18.635 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:18.635 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4145289 00:07:18.635 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:18.635 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:18.635 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:18.635 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:18.635 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:18.635 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:18.636 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:18.636 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4145289 00:07:18.636 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:18.636 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:18.636 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:18.636 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:18.636 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:18.636 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4145289 00:07:18.636 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:18.636 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:18.636 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:18.636 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4145289 00:07:18.636 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:18.636 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4145289 00:07:18.636 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:18.636 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:18.636 12:24:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:18.636 12:24:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4145289 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 4145289 ']' 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 4145289 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4145289 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4145289' 00:07:18.636 killing process with pid 4145289 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 4145289 00:07:18.636 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 4145289 00:07:18.894 00:07:18.894 real 0m1.422s 00:07:18.894 user 0m1.447s 00:07:18.894 sys 0m0.456s 00:07:18.894 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.894 12:24:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.894 ************************************ 00:07:18.894 END TEST dpdk_mem_utility 00:07:18.894 ************************************ 00:07:18.894 12:24:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:18.894 12:24:13 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:18.894 12:24:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.894 12:24:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.894 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.894 ************************************ 00:07:18.894 START TEST event 00:07:18.894 ************************************ 00:07:18.894 12:24:13 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:19.178 * Looking for test storage... 
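The dpdk_mem_utility pass above has two halves: the env_dpdk_get_mem_stats RPC, whose reply names /tmp/spdk_mem_dump.txt as the file the target wrote its DPDK memory statistics into, and scripts/dpdk_mem_info.py, which digests that dump; with no flags it prints the heap, mempool and memzone totals, while -m 0 expands heap id 0 into the element lists shown above. As a sketch:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
"$rootdir/scripts/dpdk_mem_info.py"                  # summary: heaps, mempools, memzones
"$rootdir/scripts/dpdk_mem_info.py" -m 0             # per-element breakdown of heap id 0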
00:07:19.178 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:19.178 12:24:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:19.178 12:24:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:19.178 12:24:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:19.178 12:24:14 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:19.178 12:24:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.178 12:24:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.178 ************************************ 00:07:19.178 START TEST event_perf 00:07:19.178 ************************************ 00:07:19.178 12:24:14 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:19.178 Running I/O for 1 seconds...[2024-07-15 12:24:14.139961] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:19.178 [2024-07-15 12:24:14.140071] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145521 ] 00:07:19.178 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.178 [2024-07-15 12:24:14.218326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.436 [2024-07-15 12:24:14.306705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.437 [2024-07-15 12:24:14.306792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.437 [2024-07-15 12:24:14.306870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.437 [2024-07-15 12:24:14.306872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.373 Running I/O for 1 seconds... 00:07:20.373 lcore 0: 186982 00:07:20.373 lcore 1: 186984 00:07:20.373 lcore 2: 186981 00:07:20.373 lcore 3: 186982 00:07:20.373 done. 00:07:20.373 00:07:20.373 real 0m1.261s 00:07:20.373 user 0m4.148s 00:07:20.373 sys 0m0.108s 00:07:20.373 12:24:15 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.373 12:24:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.373 ************************************ 00:07:20.373 END TEST event_perf 00:07:20.373 ************************************ 00:07:20.373 12:24:15 event -- common/autotest_common.sh@1142 -- # return 0 00:07:20.373 12:24:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:20.373 12:24:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.373 12:24:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.373 12:24:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.373 ************************************ 00:07:20.373 START TEST event_reactor 00:07:20.373 ************************************ 00:07:20.373 12:24:15 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:20.373 [2024-07-15 12:24:15.487632] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
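event_perf, reactor and reactor_perf are small standalone binaries under test/event/ that exercise the event framework directly: -m sets the core mask and -t the run time in seconds, and each prints either per-lcore event counts or an events-per-second figure like the ones in this log. The invocations, as traced:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$rootdir/test/event/event_perf/event_perf" -m 0xF -t 1      # per-lcore counts across 4 cores
"$rootdir/test/event/reactor/reactor" -t 1                   # oneshot/tick trace on the default core
"$rootdir/test/event/reactor_perf/reactor_perf" -t 1         # events per second on one reactor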
00:07:20.373 [2024-07-15 12:24:15.487717] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145722 ] 00:07:20.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.632 [2024-07-15 12:24:15.564719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.632 [2024-07-15 12:24:15.648662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.006 test_start 00:07:22.006 oneshot 00:07:22.006 tick 100 00:07:22.006 tick 100 00:07:22.006 tick 250 00:07:22.006 tick 100 00:07:22.006 tick 100 00:07:22.006 tick 100 00:07:22.006 tick 250 00:07:22.006 tick 500 00:07:22.006 tick 100 00:07:22.006 tick 100 00:07:22.006 tick 250 00:07:22.006 tick 100 00:07:22.006 tick 100 00:07:22.006 test_end 00:07:22.006 00:07:22.006 real 0m1.253s 00:07:22.007 user 0m1.153s 00:07:22.007 sys 0m0.095s 00:07:22.007 12:24:16 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.007 12:24:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 ************************************ 00:07:22.007 END TEST event_reactor 00:07:22.007 ************************************ 00:07:22.007 12:24:16 event -- common/autotest_common.sh@1142 -- # return 0 00:07:22.007 12:24:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:22.007 12:24:16 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:22.007 12:24:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.007 12:24:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.007 ************************************ 00:07:22.007 START TEST event_reactor_perf 00:07:22.007 ************************************ 00:07:22.007 12:24:16 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:22.007 [2024-07-15 12:24:16.823361] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:22.007 [2024-07-15 12:24:16.823452] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145919 ] 00:07:22.007 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.007 [2024-07-15 12:24:16.899875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.007 [2024-07-15 12:24:16.988162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.943 test_start 00:07:22.943 test_end 00:07:22.943 Performance: 955688 events per second 00:07:22.943 00:07:22.943 real 0m1.255s 00:07:22.943 user 0m1.154s 00:07:22.943 sys 0m0.097s 00:07:22.943 12:24:18 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.943 12:24:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.943 ************************************ 00:07:22.943 END TEST event_reactor_perf 00:07:22.943 ************************************ 00:07:23.202 12:24:18 event -- common/autotest_common.sh@1142 -- # return 0 00:07:23.202 12:24:18 event -- event/event.sh@49 -- # uname -s 00:07:23.202 12:24:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:23.202 12:24:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:23.202 12:24:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.202 12:24:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.202 12:24:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.202 ************************************ 00:07:23.202 START TEST event_scheduler 00:07:23.202 ************************************ 00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:23.202 * Looking for test storage... 00:07:23.202 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:07:23.202 12:24:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:23.202 12:24:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4146137 00:07:23.202 12:24:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.202 12:24:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4146137 00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 4146137 ']' 00:07:23.202 12:24:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.202 12:24:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:23.202 [2024-07-15 12:24:18.273838] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:23.202 [2024-07-15 12:24:18.273915] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146137 ] 00:07:23.202 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.461 [2024-07-15 12:24:18.344384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.461 [2024-07-15 12:24:18.437412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.461 [2024-07-15 12:24:18.437487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.461 [2024-07-15 12:24:18.437507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.461 [2024-07-15 12:24:18.437508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:24.027 12:24:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.027 [2024-07-15 12:24:19.132025] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:24.027 [2024-07-15 12:24:19.132045] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:24.027 [2024-07-15 12:24:19.132057] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:24.027 [2024-07-15 12:24:19.132065] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:24.027 [2024-07-15 12:24:19.132072] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.027 12:24:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.027 12:24:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 [2024-07-15 12:24:19.207222] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
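Because the scheduler test app is started with --wait-for-rpc, it sits in a pre-init state until the RPCs above switch it to the dynamic scheduler and then complete initialization; the NOTICE lines about load limit 20, core limit 80 and core busy 95 are the dynamic scheduler announcing its default thresholds. Roughly the same sequence by hand:

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
"$rootdir/scripts/rpc.py" framework_set_scheduler dynamic    # must happen before init completes
"$rootdir/scripts/rpc.py" framework_start_init               # finish bringing the app up
"$rootdir/scripts/rpc.py" framework_get_scheduler            # confirm the active scheduler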
00:07:24.286 12:24:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:24.286 12:24:19 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.286 12:24:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 ************************************ 00:07:24.286 START TEST scheduler_create_thread 00:07:24.286 ************************************ 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 2 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 3 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 4 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 5 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 6 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 7 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 8 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 9 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.286 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.287 10 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.287 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.854 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.854 12:24:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:24.854 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.854 12:24:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.230 12:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.230 12:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:26.231 12:24:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:26.231 12:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.231 12:24:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.607 12:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.607 00:07:27.607 real 0m3.100s 00:07:27.607 user 0m0.020s 00:07:27.607 sys 0m0.011s 00:07:27.607 12:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.607 12:24:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.607 ************************************ 00:07:27.607 END TEST scheduler_create_thread 00:07:27.607 ************************************ 00:07:27.607 12:24:22 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:27.607 12:24:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:27.607 12:24:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4146137 00:07:27.607 12:24:22 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 4146137 ']' 00:07:27.607 12:24:22 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 4146137 00:07:27.607 12:24:22 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4146137 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4146137' 00:07:27.608 killing process with pid 4146137 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 4146137 00:07:27.608 12:24:22 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 4146137 00:07:27.608 [2024-07-15 12:24:22.726397] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
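For reference, the scheduler_create_thread block traced above boils down to a handful of scheduler-plugin RPCs. A minimal hand-replay against a running scheduler app might look like the sketch below; it assumes the rpc_cmd helper in the test wraps scripts/rpc.py on the app's default RPC socket, and it only captures the last two thread ids because those are the ones the test goes on to manipulate.

    RPC="./scripts/rpc.py"                                                      # assumed target of the rpc_cmd wrapper
    # four busy threads and four idle threads, each pinned to one of cores 0-3
    for m in 0x1 0x2 0x4 0x8; do
        $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $m -a 100
        $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m $m -a 0
    done
    $RPC --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $RPC --plugin scheduler_plugin scheduler_thread_set_active $tid 50          # raise the idle thread to 50% active
    tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $RPC --plugin scheduler_plugin scheduler_thread_delete $tid                 # created only to be removed again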
00:07:27.866 00:07:27.866 real 0m4.804s 00:07:27.866 user 0m9.356s 00:07:27.866 sys 0m0.444s 00:07:27.866 12:24:22 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.866 12:24:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:27.866 ************************************ 00:07:27.866 END TEST event_scheduler 00:07:27.866 ************************************ 00:07:28.125 12:24:22 event -- common/autotest_common.sh@1142 -- # return 0 00:07:28.125 12:24:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:28.125 12:24:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:28.125 12:24:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.125 12:24:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.125 12:24:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.125 ************************************ 00:07:28.125 START TEST app_repeat 00:07:28.125 ************************************ 00:07:28.125 12:24:23 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4146888 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4146888' 00:07:28.125 Process app_repeat pid: 4146888 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:28.125 spdk_app_start Round 0 00:07:28.125 12:24:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4146888 /var/tmp/spdk-nbd.sock 00:07:28.125 12:24:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4146888 ']' 00:07:28.125 12:24:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.125 12:24:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.125 12:24:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:28.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:28.126 12:24:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.126 12:24:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:28.126 [2024-07-15 12:24:23.075518] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:28.126 [2024-07-15 12:24:23.075620] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146888 ] 00:07:28.126 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.126 [2024-07-15 12:24:23.154451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.126 [2024-07-15 12:24:23.243608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.126 [2024-07-15 12:24:23.243612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.060 12:24:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.060 12:24:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:29.060 12:24:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.060 Malloc0 00:07:29.060 12:24:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.320 Malloc1 00:07:29.320 12:24:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.320 12:24:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:29.579 /dev/nbd0 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:29.579 12:24:24 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.579 1+0 records in 00:07:29.579 1+0 records out 00:07:29.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254397 s, 16.1 MB/s 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:29.579 /dev/nbd1 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:29.579 12:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:29.579 12:24:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.837 1+0 records in 00:07:29.837 1+0 records out 00:07:29.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192532 s, 21.3 MB/s 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:29.837 12:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.837 
12:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:29.837 { 00:07:29.837 "nbd_device": "/dev/nbd0", 00:07:29.837 "bdev_name": "Malloc0" 00:07:29.837 }, 00:07:29.837 { 00:07:29.837 "nbd_device": "/dev/nbd1", 00:07:29.837 "bdev_name": "Malloc1" 00:07:29.837 } 00:07:29.837 ]' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:29.837 { 00:07:29.837 "nbd_device": "/dev/nbd0", 00:07:29.837 "bdev_name": "Malloc0" 00:07:29.837 }, 00:07:29.837 { 00:07:29.837 "nbd_device": "/dev/nbd1", 00:07:29.837 "bdev_name": "Malloc1" 00:07:29.837 } 00:07:29.837 ]' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:29.837 /dev/nbd1' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:29.837 /dev/nbd1' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:29.837 12:24:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.096 256+0 records in 00:07:30.096 256+0 records out 00:07:30.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110557 s, 94.8 MB/s 00:07:30.096 12:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.096 12:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.096 256+0 records in 00:07:30.096 256+0 records out 00:07:30.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210806 s, 49.7 MB/s 00:07:30.096 12:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.096 12:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.096 256+0 records in 00:07:30.096 256+0 records out 
00:07:30.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224374 s, 46.7 MB/s 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.096 12:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:30.389 12:24:25 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.389 12:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:30.647 12:24:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:30.647 12:24:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:30.906 12:24:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:31.165 [2024-07-15 12:24:26.037609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.165 [2024-07-15 12:24:26.119007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.165 [2024-07-15 12:24:26.119009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.165 [2024-07-15 12:24:26.164647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:31.165 [2024-07-15 12:24:26.164699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:34.473 12:24:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:34.473 12:24:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:34.473 spdk_app_start Round 1 00:07:34.473 12:24:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4146888 /var/tmp/spdk-nbd.sock 00:07:34.473 12:24:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4146888 ']' 00:07:34.473 12:24:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.473 12:24:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.473 12:24:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:34.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
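Each app_repeat round traced above (and repeated for Rounds 1 and 2 below) performs the same NBD data-integrity check: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random file is written through both devices, and each device is compared back against the file. A condensed sketch of one round, with paths shortened for readability (the log uses the full workspace paths and the /var/tmp/spdk-nbd.sock socket shown above):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                              # -> Malloc0
    $RPC bdev_malloc_create 64 4096                              # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB of random data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct   # write it through the NBD device
        cmp -b -n 1M nbdrandtest $d                              # read back and verify byte-for-byte
    done
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                              # end of the round; app_repeat restarts the app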
00:07:34.473 12:24:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.473 12:24:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:34.473 12:24:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.473 12:24:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:34.473 12:24:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.473 Malloc0 00:07:34.473 12:24:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.473 Malloc1 00:07:34.473 12:24:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.473 /dev/nbd0 00:07:34.473 12:24:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.731 1+0 records in 00:07:34.731 1+0 records out 00:07:34.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216053 s, 19.0 MB/s 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:34.731 /dev/nbd1 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.731 1+0 records in 00:07:34.731 1+0 records out 00:07:34.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289001 s, 14.2 MB/s 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:34.731 12:24:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.731 12:24:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.989 { 00:07:34.989 "nbd_device": "/dev/nbd0", 00:07:34.989 "bdev_name": "Malloc0" 00:07:34.989 }, 00:07:34.989 { 00:07:34.989 "nbd_device": "/dev/nbd1", 00:07:34.989 "bdev_name": "Malloc1" 00:07:34.989 } 00:07:34.989 ]' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.989 { 00:07:34.989 "nbd_device": "/dev/nbd0", 00:07:34.989 "bdev_name": "Malloc0" 00:07:34.989 }, 00:07:34.989 { 00:07:34.989 "nbd_device": "/dev/nbd1", 00:07:34.989 "bdev_name": "Malloc1" 00:07:34.989 } 00:07:34.989 ]' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.989 /dev/nbd1' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.989 /dev/nbd1' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:34.989 256+0 records in 00:07:34.989 256+0 records out 00:07:34.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102256 s, 103 MB/s 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.989 256+0 records in 00:07:34.989 256+0 records out 00:07:34.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209058 s, 50.2 MB/s 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.989 12:24:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:35.247 256+0 records in 00:07:35.247 256+0 records out 00:07:35.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224169 s, 46.8 MB/s 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.247 12:24:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.504 12:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:35.762 12:24:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:35.762 12:24:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:36.020 12:24:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:36.278 [2024-07-15 12:24:31.191437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.278 [2024-07-15 12:24:31.273156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.278 [2024-07-15 12:24:31.273158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.278 [2024-07-15 12:24:31.319835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:36.278 [2024-07-15 12:24:31.319884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:39.563 12:24:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.563 12:24:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:39.563 spdk_app_start Round 2 00:07:39.563 12:24:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4146888 /var/tmp/spdk-nbd.sock 00:07:39.563 12:24:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4146888 ']' 00:07:39.563 12:24:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.563 12:24:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.563 12:24:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:39.563 12:24:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.563 12:24:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 12:24:34 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.563 12:24:34 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:39.563 12:24:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.563 Malloc0 00:07:39.563 12:24:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.563 Malloc1 00:07:39.563 12:24:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.563 12:24:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:39.822 /dev/nbd0 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.822 1+0 records in 00:07:39.822 1+0 records out 00:07:39.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238049 s, 17.2 MB/s 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.822 /dev/nbd1 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.822 12:24:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:39.822 12:24:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.823 12:24:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.082 1+0 records in 00:07:40.082 1+0 records out 00:07:40.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157748 s, 26.0 MB/s 00:07:40.082 12:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:40.082 12:24:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:40.082 12:24:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:40.082 12:24:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.082 12:24:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:40.082 12:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.082 12:24:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.082 12:24:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.082 12:24:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.082 12:24:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.082 { 00:07:40.082 "nbd_device": "/dev/nbd0", 00:07:40.082 "bdev_name": "Malloc0" 00:07:40.082 }, 00:07:40.082 { 00:07:40.082 "nbd_device": "/dev/nbd1", 00:07:40.082 "bdev_name": "Malloc1" 00:07:40.082 } 00:07:40.082 ]' 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.082 { 00:07:40.082 "nbd_device": "/dev/nbd0", 00:07:40.082 "bdev_name": "Malloc0" 00:07:40.082 }, 00:07:40.082 { 00:07:40.082 "nbd_device": "/dev/nbd1", 00:07:40.082 "bdev_name": "Malloc1" 00:07:40.082 } 00:07:40.082 ]' 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.082 /dev/nbd1' 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.082 /dev/nbd1' 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.082 12:24:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.083 12:24:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:40.083 256+0 records in 00:07:40.083 256+0 records out 00:07:40.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523021 s, 200 MB/s 00:07:40.083 12:24:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.083 12:24:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.341 256+0 records in 00:07:40.341 256+0 records out 00:07:40.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208971 s, 50.2 MB/s 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.341 256+0 records in 00:07:40.341 256+0 records out 00:07:40.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222209 s, 47.2 MB/s 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.341 12:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.600 12:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.858 12:24:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.858 12:24:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.117 12:24:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:41.376 [2024-07-15 12:24:36.309513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.376 [2024-07-15 12:24:36.387873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.376 [2024-07-15 12:24:36.387874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.376 [2024-07-15 12:24:36.428307] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:41.376 [2024-07-15 12:24:36.428352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:44.661 12:24:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4146888 /var/tmp/spdk-nbd.sock 00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4146888 ']' 00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.661 12:24:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:44.662 12:24:39 event.app_repeat -- event/event.sh@39 -- # killprocess 4146888 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 4146888 ']' 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 4146888 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4146888 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4146888' 00:07:44.662 killing process with pid 4146888 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@967 -- # kill 4146888 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@972 -- # wait 4146888 00:07:44.662 spdk_app_start is called in Round 0. 00:07:44.662 Shutdown signal received, stop current app iteration 00:07:44.662 Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 reinitialization... 00:07:44.662 spdk_app_start is called in Round 1. 00:07:44.662 Shutdown signal received, stop current app iteration 00:07:44.662 Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 reinitialization... 00:07:44.662 spdk_app_start is called in Round 2. 00:07:44.662 Shutdown signal received, stop current app iteration 00:07:44.662 Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 reinitialization... 00:07:44.662 spdk_app_start is called in Round 3. 
00:07:44.662 Shutdown signal received, stop current app iteration 00:07:44.662 12:24:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:44.662 12:24:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:44.662 00:07:44.662 real 0m16.479s 00:07:44.662 user 0m34.759s 00:07:44.662 sys 0m3.291s 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.662 12:24:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.662 ************************************ 00:07:44.662 END TEST app_repeat 00:07:44.662 ************************************ 00:07:44.662 12:24:39 event -- common/autotest_common.sh@1142 -- # return 0 00:07:44.662 12:24:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:44.662 12:24:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:44.662 12:24:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.662 12:24:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.662 12:24:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.662 ************************************ 00:07:44.662 START TEST cpu_locks 00:07:44.662 ************************************ 00:07:44.662 12:24:39 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:44.662 * Looking for test storage... 00:07:44.662 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:44.662 12:24:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:44.662 12:24:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:44.662 12:24:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:44.662 12:24:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:44.662 12:24:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.662 12:24:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.662 12:24:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.662 ************************************ 00:07:44.662 START TEST default_locks 00:07:44.662 ************************************ 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4149228 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4149228 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4149228 ']' 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
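[editor's note] The app_repeat teardown traced above (killprocess 4146888) follows a fixed pattern: confirm the PID is still alive with kill -0, read its command name with ps so a sudo wrapper is not signalled by mistake, then kill and wait. A simplified sketch, modelled on the traced autotest_common.sh calls and not the real helper:

```bash
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid"                          # fail fast if the process already exited
    if [ "$(uname)" = Linux ]; then
        # the traced helper reads the command name and refuses to treat a sudo
        # wrapper like the reactor process itself; that branch is omitted here
        ps --no-headers -o comm= "$pid" > /dev/null
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                     # reaping works because spdk_tgt is our child
}
```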
00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.662 12:24:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.662 [2024-07-15 12:24:39.785582] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:44.662 [2024-07-15 12:24:39.785649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149228 ] 00:07:44.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.921 [2024-07-15 12:24:39.860597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.921 [2024-07-15 12:24:39.947289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.485 12:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.485 12:24:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:45.485 12:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4149228 00:07:45.741 12:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.741 12:24:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4149228 00:07:46.305 lslocks: write error 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4149228 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 4149228 ']' 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 4149228 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4149228 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4149228' 00:07:46.305 killing process with pid 4149228 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 4149228 00:07:46.305 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 4149228 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4149228 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4149228 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 4149228 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4149228 ']' 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.868 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4149228) - No such process 00:07:46.868 ERROR: process (pid: 4149228) is no longer running 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:46.868 00:07:46.868 real 0m1.962s 00:07:46.868 user 0m2.044s 00:07:46.868 sys 0m0.721s 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.868 12:24:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.868 ************************************ 00:07:46.868 END TEST default_locks 00:07:46.868 ************************************ 00:07:46.868 12:24:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:46.868 12:24:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:46.868 12:24:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.868 12:24:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.868 12:24:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.868 ************************************ 00:07:46.868 START TEST default_locks_via_rpc 00:07:46.868 ************************************ 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4149605 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4149605 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4149605 ']' 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.868 12:24:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.868 [2024-07-15 12:24:41.828085] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:46.868 [2024-07-15 12:24:41.828165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149605 ] 00:07:46.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.868 [2024-07-15 12:24:41.904378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.868 [2024-07-15 12:24:41.989741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:47.799 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4149605 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4149605 00:07:47.800 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
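[editor's note] The default_locks_via_rpc sequence just traced releases and re-acquires the CPU-mask locks at runtime over RPC, and checks their presence through lslocks. A minimal sketch in the traced order; helper names follow event/cpu_locks.sh, but their bodies here are an approximation, and $spdk_tgt_pid is a hypothetical stand-in for the PID of the target started with -m 0x1.

```bash
rpc_py=./scripts/rpc.py                     # defaults to /var/tmp/spdk.sock
pid=$spdk_tgt_pid                           # hypothetical: PID of the running spdk_tgt

locks_exist() {
    # the core locks are flocks on /var/tmp/spdk_cpu_lock_*; lslocks lists them per PID
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

no_locks() {
    shopt -s nullglob
    local lock_files=(/var/tmp/spdk_cpu_lock*)
    shopt -u nullglob
    ((${#lock_files[@]} == 0))
}

"$rpc_py" framework_disable_cpumask_locks   # release the per-core locks at runtime
no_locks                                    # no spdk_cpu_lock files may remain

"$rpc_py" framework_enable_cpumask_locks    # claim them again
locks_exist "$pid"                          # lslocks must now show the flock
```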
00:07:48.057 12:24:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4149605 00:07:48.057 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 4149605 ']' 00:07:48.057 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 4149605 00:07:48.057 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:48.057 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.057 12:24:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4149605 00:07:48.057 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.057 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.057 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4149605' 00:07:48.057 killing process with pid 4149605 00:07:48.057 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 4149605 00:07:48.057 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 4149605 00:07:48.313 00:07:48.313 real 0m1.558s 00:07:48.313 user 0m1.594s 00:07:48.313 sys 0m0.561s 00:07:48.313 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.313 12:24:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.313 ************************************ 00:07:48.313 END TEST default_locks_via_rpc 00:07:48.313 ************************************ 00:07:48.313 12:24:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:48.313 12:24:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:48.313 12:24:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.313 12:24:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.313 12:24:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.313 ************************************ 00:07:48.313 START TEST non_locking_app_on_locked_coremask 00:07:48.313 ************************************ 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4149813 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4149813 /var/tmp/spdk.sock 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4149813 ']' 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:48.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.313 12:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.570 [2024-07-15 12:24:43.458439] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:48.570 [2024-07-15 12:24:43.458500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149813 ] 00:07:48.570 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.570 [2024-07-15 12:24:43.533152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.570 [2024-07-15 12:24:43.623568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4149937 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4149937 /var/tmp/spdk2.sock 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4149937 ']' 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.501 12:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:49.501 [2024-07-15 12:24:44.302661] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:07:49.501 [2024-07-15 12:24:44.302755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149937 ] 00:07:49.501 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.501 [2024-07-15 12:24:44.405273] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
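[editor's note] The non_locking_app_on_locked_coremask run traced here shows that a second target may share a core already locked by another instance as long as it opts out of the CPU-mask locks, which is why the second startup logs "CPU core locks deactivated." A reduced sketch with flags and paths taken from the trace; the surrounding plumbing (waitforlisten, cleanup traps) is replaced by a plain sleep:

```bash
spdk_tgt=./build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                         # first instance claims /var/tmp/spdk_cpu_lock_000
pid1=$!
sleep 1                                      # the traced test uses waitforlisten instead

# same core, but with the locks opted out and a second RPC socket:
# startup succeeds and logs "CPU core locks deactivated."
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
sleep 1

lslocks -p "$pid1" | grep -q spdk_cpu_lock   # the locked instance still holds its flock

kill "$pid1" "$pid2"
wait "$pid1" "$pid2" || true
```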
00:07:49.501 [2024-07-15 12:24:44.405307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.501 [2024-07-15 12:24:44.582198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.064 12:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.064 12:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:50.064 12:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4149813 00:07:50.064 12:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4149813 00:07:50.064 12:24:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.994 lslocks: write error 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4149813 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4149813 ']' 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4149813 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4149813 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4149813' 00:07:50.994 killing process with pid 4149813 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4149813 00:07:50.994 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4149813 00:07:51.628 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4149937 00:07:51.628 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4149937 ']' 00:07:51.628 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4149937 00:07:51.628 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:51.628 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.628 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4149937 00:07:51.886 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.886 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.886 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4149937' 00:07:51.886 
killing process with pid 4149937 00:07:51.886 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4149937 00:07:51.886 12:24:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4149937 00:07:52.144 00:07:52.144 real 0m3.705s 00:07:52.144 user 0m3.933s 00:07:52.144 sys 0m1.188s 00:07:52.144 12:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.144 12:24:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.144 ************************************ 00:07:52.144 END TEST non_locking_app_on_locked_coremask 00:07:52.144 ************************************ 00:07:52.144 12:24:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:52.144 12:24:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:52.144 12:24:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.144 12:24:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.144 12:24:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.144 ************************************ 00:07:52.144 START TEST locking_app_on_unlocked_coremask 00:07:52.144 ************************************ 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4150386 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4150386 /var/tmp/spdk.sock 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4150386 ']' 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:52.144 12:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.144 [2024-07-15 12:24:47.244877] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:52.144 [2024-07-15 12:24:47.244952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150386 ] 00:07:52.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.401 [2024-07-15 12:24:47.319157] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:52.401 [2024-07-15 12:24:47.319188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.401 [2024-07-15 12:24:47.408122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4150395 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4150395 /var/tmp/spdk2.sock 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4150395 ']' 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.964 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.964 [2024-07-15 12:24:48.082816] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:52.964 [2024-07-15 12:24:48.082874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150395 ] 00:07:53.220 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.220 [2024-07-15 12:24:48.181996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.477 [2024-07-15 12:24:48.353618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.041 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.041 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:54.041 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4150395 00:07:54.041 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4150395 00:07:54.041 12:24:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:55.409 lslocks: write error 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4150386 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4150386 ']' 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4150386 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4150386 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4150386' 00:07:55.409 killing process with pid 4150386 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4150386 00:07:55.409 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4150386 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4150395 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4150395 ']' 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4150395 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4150395 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4150395' 00:07:55.974 killing process with pid 4150395 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4150395 00:07:55.974 12:24:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4150395 00:07:56.231 00:07:56.231 real 0m4.014s 00:07:56.231 user 0m4.239s 00:07:56.231 sys 0m1.325s 00:07:56.231 12:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.231 12:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.231 ************************************ 00:07:56.231 END TEST locking_app_on_unlocked_coremask 00:07:56.231 ************************************ 00:07:56.231 12:24:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:56.231 12:24:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:56.232 12:24:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.232 12:24:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.232 12:24:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 ************************************ 00:07:56.232 START TEST locking_app_on_locked_coremask 00:07:56.232 ************************************ 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4150952 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4150952 /var/tmp/spdk.sock 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4150952 ']' 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.232 12:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.232 [2024-07-15 12:24:51.340265] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:56.232 [2024-07-15 12:24:51.340337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150952 ] 00:07:56.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.489 [2024-07-15 12:24:51.415691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.489 [2024-07-15 12:24:51.507855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4150964 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4150964 /var/tmp/spdk2.sock 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4150964 /var/tmp/spdk2.sock 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4150964 /var/tmp/spdk2.sock 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4150964 ']' 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.053 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.310 [2024-07-15 12:24:52.183437] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:57.310 [2024-07-15 12:24:52.183503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4150964 ] 00:07:57.310 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.310 [2024-07-15 12:24:52.283060] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4150952 has claimed it. 00:07:57.310 [2024-07-15 12:24:52.283101] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:57.874 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4150964) - No such process 00:07:57.874 ERROR: process (pid: 4150964) is no longer running 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4150952 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4150952 00:07:57.874 12:24:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.437 lslocks: write error 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4150952 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4150952 ']' 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4150952 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4150952 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4150952' 00:07:58.437 killing process with pid 4150952 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4150952 00:07:58.437 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4150952 00:07:58.695 00:07:58.695 real 0m2.444s 00:07:58.695 user 0m2.592s 00:07:58.695 sys 0m0.778s 00:07:58.695 12:24:53 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.695 12:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.695 ************************************ 00:07:58.695 END TEST locking_app_on_locked_coremask 00:07:58.695 ************************************ 00:07:58.695 12:24:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:58.695 12:24:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:58.695 12:24:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.695 12:24:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.695 12:24:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.953 ************************************ 00:07:58.953 START TEST locking_overlapped_coremask 00:07:58.953 ************************************ 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4151316 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4151316 /var/tmp/spdk.sock 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4151316 ']' 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.953 12:24:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.953 [2024-07-15 12:24:53.868339] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
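[editor's note] The locking_app_on_locked_coremask run that finished just above inverts the previous case: the second target keeps the core locks enabled, so its startup must fail, and the test wraps the attempt in the NOT helper, which succeeds only on a non-zero exit. A reduced sketch of that pattern; the traced helper is more elaborate (argument validation, exit codes above 128 treated as signal deaths), and in the real test NOT wraps waitforlisten on the second RPC socket rather than the target binary itself:

```bash
NOT() {
    if "$@"; then
        return 1        # the command was supposed to fail
    fi
    return 0
}

spdk_tgt=./build/bin/spdk_tgt
"$spdk_tgt" -m 0x1 &                 # first instance claims the core-0 lock
pid1=$!
sleep 1                              # stand-in for waitforlisten

# a second instance with the locks enabled must abort:
# "Cannot create lock on core 0, probably process <pid1> has claimed it."
NOT "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock

kill "$pid1"; wait "$pid1" || true
```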
00:07:58.953 [2024-07-15 12:24:53.868400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151316 ] 00:07:58.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.953 [2024-07-15 12:24:53.941523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.953 [2024-07-15 12:24:54.035723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.953 [2024-07-15 12:24:54.035808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.953 [2024-07-15 12:24:54.035810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4151354 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4151354 /var/tmp/spdk2.sock 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4151354 /var/tmp/spdk2.sock 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4151354 /var/tmp/spdk2.sock 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4151354 ']' 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.886 12:24:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.886 [2024-07-15 12:24:54.726875] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:07:59.886 [2024-07-15 12:24:54.726944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151354 ] 00:07:59.886 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.886 [2024-07-15 12:24:54.825866] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4151316 has claimed it. 00:07:59.886 [2024-07-15 12:24:54.825904] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.452 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4151354) - No such process 00:08:00.452 ERROR: process (pid: 4151354) is no longer running 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4151316 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 4151316 ']' 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 4151316 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4151316 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4151316' 00:08:00.452 killing process with pid 4151316 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 4151316 00:08:00.452 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 4151316 00:08:00.710 00:08:00.710 real 0m1.926s 00:08:00.710 user 0m5.350s 00:08:00.710 sys 0m0.487s 00:08:00.710 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.710 12:24:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.710 ************************************ 00:08:00.710 END TEST locking_overlapped_coremask 00:08:00.710 ************************************ 00:08:00.710 12:24:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:00.710 12:24:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:00.710 12:24:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.710 12:24:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.710 12:24:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.968 ************************************ 00:08:00.969 START TEST locking_overlapped_coremask_via_rpc 00:08:00.969 ************************************ 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4151558 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4151558 /var/tmp/spdk.sock 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4151558 ']' 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:00.969 12:24:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.969 [2024-07-15 12:24:55.873418] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:00.969 [2024-07-15 12:24:55.873501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151558 ] 00:08:00.969 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.969 [2024-07-15 12:24:55.948076] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
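[editor's note] The locking_overlapped_coremask run traced above pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4): the second launch must die on the shared core 2, and check_remaining_locks then asserts that exactly the three lock files of the surviving instance are left. A reduced sketch built from the traced commands, with waitforlisten replaced by a sleep:

```bash
spdk_tgt=./build/bin/spdk_tgt

"$spdk_tgt" -m 0x7 &                       # locks cores 0, 1 and 2
pid1=$!
sleep 1                                    # stand-in for waitforlisten

# overlapping mask: expected to exit with
# "Cannot create lock on core 2, probably process <pid1> has claimed it."
! "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock

# check_remaining_locks: exactly spdk_cpu_lock_000..002 may be left behind
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]]

kill "$pid1"; wait "$pid1" || true
```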
00:08:00.969 [2024-07-15 12:24:55.948109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.969 [2024-07-15 12:24:56.042542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.969 [2024-07-15 12:24:56.042634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.969 [2024-07-15 12:24:56.042637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4151731 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4151731 /var/tmp/spdk2.sock 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4151731 ']' 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.903 12:24:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.903 [2024-07-15 12:24:56.734011] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:01.903 [2024-07-15 12:24:56.734081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151731 ] 00:08:01.903 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.903 [2024-07-15 12:24:56.832885] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.903 [2024-07-15 12:24:56.832913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.903 [2024-07-15 12:24:57.000164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.903 [2024-07-15 12:24:57.003582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.903 [2024-07-15 12:24:57.003584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.468 [2024-07-15 12:24:57.573591] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4151558 has claimed it. 
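The claim failure logged above follows directly from the two core masks used in this test: the first spdk_tgt runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so core 2 is requested by both. A quick shell sketch of the overlap (the mask values are the ones from the spdk_tgt command lines above):

    mask1=0x7     # first spdk_tgt: cores 0,1,2
    mask2=0x1c    # second spdk_tgt: cores 2,3,4
    printf 'contested cores mask: 0x%x\n' $(( mask1 & mask2 ))   # -> 0x4, i.e. core 2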
00:08:02.468 request: 00:08:02.468 { 00:08:02.468 "method": "framework_enable_cpumask_locks", 00:08:02.468 "req_id": 1 00:08:02.468 } 00:08:02.468 Got JSON-RPC error response 00:08:02.468 response: 00:08:02.468 { 00:08:02.468 "code": -32603, 00:08:02.468 "message": "Failed to claim CPU core: 2" 00:08:02.468 } 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4151558 /var/tmp/spdk.sock 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4151558 ']' 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.468 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.469 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4151731 /var/tmp/spdk2.sock 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4151731 ']' 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
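The -32603 response above is the expected negative result: the first target already holds the lock for core 2, so the second target cannot enable its own core locks. Outside the test wrapper the same JSON-RPC method can be driven with SPDK's rpc.py helper, roughly as follows (a sketch, assuming the usual scripts/rpc.py location; the method name and the spdk2.sock path are the ones shown in this log):

    # first target (default /var/tmp/spdk.sock) takes its core locks
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second target overlaps on core 2, so the same call on its socket fails
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected: Failed to claim CPU core: 2'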
00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.726 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.984 00:08:02.984 real 0m2.127s 00:08:02.984 user 0m0.851s 00:08:02.984 sys 0m0.209s 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.984 12:24:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.984 ************************************ 00:08:02.984 END TEST locking_overlapped_coremask_via_rpc 00:08:02.984 ************************************ 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:02.984 12:24:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:02.984 12:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4151558 ]] 00:08:02.984 12:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4151558 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4151558 ']' 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4151558 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4151558 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4151558' 00:08:02.984 killing process with pid 4151558 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4151558 00:08:02.984 12:24:58 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4151558 00:08:03.549 12:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4151731 ]] 00:08:03.549 12:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4151731 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4151731 ']' 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4151731 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4151731 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4151731' 00:08:03.549 killing process with pid 4151731 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4151731 00:08:03.549 12:24:58 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4151731 00:08:03.806 12:24:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.807 12:24:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:03.807 12:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4151558 ]] 00:08:03.807 12:24:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4151558 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4151558 ']' 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4151558 00:08:03.807 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4151558) - No such process 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4151558 is not found' 00:08:03.807 Process with pid 4151558 is not found 00:08:03.807 12:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4151731 ]] 00:08:03.807 12:24:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4151731 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4151731 ']' 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4151731 00:08:03.807 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4151731) - No such process 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4151731 is not found' 00:08:03.807 Process with pid 4151731 is not found 00:08:03.807 12:24:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.807 00:08:03.807 real 0m19.187s 00:08:03.807 user 0m31.301s 00:08:03.807 sys 0m6.348s 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.807 12:24:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.807 ************************************ 00:08:03.807 END TEST cpu_locks 00:08:03.807 ************************************ 00:08:03.807 12:24:58 event -- common/autotest_common.sh@1142 -- # return 0 00:08:03.807 00:08:03.807 real 0m44.866s 00:08:03.807 user 1m22.074s 00:08:03.807 sys 0m10.850s 00:08:03.807 12:24:58 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.807 12:24:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:03.807 ************************************ 00:08:03.807 END TEST event 00:08:03.807 ************************************ 00:08:03.807 12:24:58 -- common/autotest_common.sh@1142 -- # return 0 00:08:03.807 12:24:58 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:03.807 12:24:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:03.807 12:24:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.807 
12:24:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.807 ************************************ 00:08:03.807 START TEST thread 00:08:03.807 ************************************ 00:08:03.807 12:24:58 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:04.064 * Looking for test storage... 00:08:04.064 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:08:04.064 12:24:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.064 12:24:59 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:04.064 12:24:59 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.064 12:24:59 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.064 ************************************ 00:08:04.064 START TEST thread_poller_perf 00:08:04.064 ************************************ 00:08:04.064 12:24:59 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.064 [2024-07-15 12:24:59.075670] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:04.064 [2024-07-15 12:24:59.075757] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152052 ] 00:08:04.064 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.064 [2024-07-15 12:24:59.150215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.321 [2024-07-15 12:24:59.238727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.321 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:05.253 ====================================== 00:08:05.253 busy:2304116020 (cyc) 00:08:05.253 total_run_count: 836000 00:08:05.253 tsc_hz: 2300000000 (cyc) 00:08:05.253 ====================================== 00:08:05.253 poller_cost: 2756 (cyc), 1198 (nsec) 00:08:05.253 00:08:05.253 real 0m1.263s 00:08:05.253 user 0m1.164s 00:08:05.253 sys 0m0.093s 00:08:05.253 12:25:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.253 12:25:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.253 ************************************ 00:08:05.253 END TEST thread_poller_perf 00:08:05.253 ************************************ 00:08:05.253 12:25:00 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:05.253 12:25:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.253 12:25:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:05.253 12:25:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.253 12:25:00 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.511 ************************************ 00:08:05.511 START TEST thread_poller_perf 00:08:05.511 ************************************ 00:08:05.511 12:25:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.511 [2024-07-15 12:25:00.404856] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:05.511 [2024-07-15 12:25:00.404916] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152235 ] 00:08:05.511 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.511 [2024-07-15 12:25:00.474212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.511 [2024-07-15 12:25:00.558183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.511 Running 1000 pollers for 1 seconds with 0 microseconds period. 
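The poller_cost line printed by poller_perf is simply the measured busy cycles divided by the number of poller runs, converted to nanoseconds with the reported TSC frequency. Re-deriving the first run's figures from the values logged above (plain bash arithmetic, shown only to make the units explicit):

    busy=2304116020 runs=836000 tsc_hz=2300000000
    cyc=$(( busy / runs ))                      # 2756 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))       # 1198 ns at 2.3 GHz
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"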
00:08:06.882 ====================================== 00:08:06.882 busy:2301263336 (cyc) 00:08:06.882 total_run_count: 14296000 00:08:06.882 tsc_hz: 2300000000 (cyc) 00:08:06.882 ====================================== 00:08:06.882 poller_cost: 160 (cyc), 69 (nsec) 00:08:06.882 00:08:06.882 real 0m1.237s 00:08:06.882 user 0m1.143s 00:08:06.882 sys 0m0.090s 00:08:06.883 12:25:01 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.883 12:25:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.883 ************************************ 00:08:06.883 END TEST thread_poller_perf 00:08:06.883 ************************************ 00:08:06.883 12:25:01 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:06.883 12:25:01 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:08:06.883 12:25:01 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:06.883 12:25:01 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.883 12:25:01 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.883 12:25:01 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.883 ************************************ 00:08:06.883 START TEST thread_spdk_lock 00:08:06.883 ************************************ 00:08:06.883 12:25:01 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:06.883 [2024-07-15 12:25:01.726517] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:06.883 [2024-07-15 12:25:01.726646] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152423 ] 00:08:06.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.883 [2024-07-15 12:25:01.800595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.883 [2024-07-15 12:25:01.887460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.883 [2024-07-15 12:25:01.887463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.449 [2024-07-15 12:25:02.370407] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.449 [2024-07-15 12:25:02.370443] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:07.449 [2024-07-15 12:25:02.370453] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x14cdec0 00:08:07.449 [2024-07-15 12:25:02.371350] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.449 [2024-07-15 12:25:02.371454] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.449 [2024-07-15 12:25:02.371473] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:07.449 Starting test contend 00:08:07.449 Worker Delay Wait us Hold us Total us 00:08:07.449 0 3 177307 183091 360399 00:08:07.449 1 5 93896 284027 377924 00:08:07.449 PASS test contend 00:08:07.449 Starting test hold_by_poller 00:08:07.449 PASS test hold_by_poller 00:08:07.449 Starting test hold_by_message 00:08:07.449 PASS test hold_by_message 00:08:07.449 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:08:07.449 100014 assertions passed 00:08:07.449 0 assertions failed 00:08:07.449 00:08:07.449 real 0m0.727s 00:08:07.449 user 0m1.114s 00:08:07.449 sys 0m0.094s 00:08:07.449 12:25:02 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.449 12:25:02 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:08:07.449 ************************************ 00:08:07.449 END TEST thread_spdk_lock 00:08:07.449 ************************************ 00:08:07.449 12:25:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:07.449 00:08:07.449 real 0m3.568s 00:08:07.449 user 0m3.546s 00:08:07.449 sys 0m0.518s 00:08:07.449 12:25:02 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.449 12:25:02 thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.449 ************************************ 00:08:07.449 END TEST thread 00:08:07.449 ************************************ 00:08:07.449 12:25:02 -- common/autotest_common.sh@1142 -- # return 0 00:08:07.449 12:25:02 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:08:07.449 12:25:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.449 12:25:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.449 12:25:02 -- common/autotest_common.sh@10 -- # set +x 00:08:07.449 ************************************ 00:08:07.449 START TEST accel 00:08:07.449 ************************************ 00:08:07.449 12:25:02 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:08:07.707 * Looking for test storage... 00:08:07.707 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:07.707 12:25:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:07.707 12:25:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:07.707 12:25:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:07.707 12:25:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4152642 00:08:07.707 12:25:02 accel -- accel/accel.sh@63 -- # waitforlisten 4152642 00:08:07.707 12:25:02 accel -- common/autotest_common.sh@829 -- # '[' -z 4152642 ']' 00:08:07.707 12:25:02 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.707 12:25:02 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.707 12:25:02 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:07.707 12:25:02 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.707 12:25:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.707 12:25:02 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:07.707 12:25:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:07.707 12:25:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.707 12:25:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.707 12:25:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.707 12:25:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.707 12:25:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.707 12:25:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:07.707 12:25:02 accel -- accel/accel.sh@41 -- # jq -r . 00:08:07.707 [2024-07-15 12:25:02.698217] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:07.707 [2024-07-15 12:25:02.698311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152642 ] 00:08:07.707 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.707 [2024-07-15 12:25:02.773708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.965 [2024-07-15 12:25:02.863057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.530 12:25:03 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.530 12:25:03 accel -- common/autotest_common.sh@862 -- # return 0 00:08:08.530 12:25:03 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:08.530 12:25:03 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:08.530 12:25:03 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:08.530 12:25:03 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:08.530 12:25:03 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:08.530 12:25:03 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:08.530 12:25:03 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.530 12:25:03 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:08.530 12:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.530 12:25:03 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.530 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.530 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.530 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.530 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.530 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.530 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.530 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.530 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.530 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 
12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.531 12:25:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.531 12:25:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.531 12:25:03 accel -- accel/accel.sh@75 -- # killprocess 4152642 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@948 -- # '[' -z 4152642 ']' 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@952 -- # kill -0 4152642 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@953 -- # uname 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4152642 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4152642' 00:08:08.531 killing process with pid 4152642 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@967 -- # kill 4152642 00:08:08.531 12:25:03 accel -- common/autotest_common.sh@972 -- # wait 4152642 00:08:09.098 12:25:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:09.098 12:25:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:09.098 12:25:03 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.098 12:25:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.098 12:25:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.098 12:25:03 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:09.098 12:25:03 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:09.098 12:25:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
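The long expected_opcs assignment above is built by asking the target for its opcode-to-module table; with no hardware accel drivers configured in this run, every opcode maps to the software module. The same table can be inspected directly with the RPC the test uses (a sketch, assuming the standard scripts/rpc.py helper; the jq filter is the one from accel.sh):

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # prints one assignment per line, e.g. copy=software, fill=software, crc32c=software, ...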
00:08:09.098 12:25:04 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.098 12:25:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:09.098 12:25:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.098 12:25:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:09.098 12:25:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:09.098 12:25:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.098 12:25:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.098 ************************************ 00:08:09.098 START TEST accel_missing_filename 00:08:09.098 ************************************ 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.098 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:09.098 12:25:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:09.098 [2024-07-15 12:25:04.115384] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:09.098 [2024-07-15 12:25:04.115475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152859 ] 00:08:09.098 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.099 [2024-07-15 12:25:04.189980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.407 [2024-07-15 12:25:04.272969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.407 [2024-07-15 12:25:04.312365] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.407 [2024-07-15 12:25:04.373435] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:09.407 A filename is required. 
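The "A filename is required." error is the intended negative case here: a compress workload has no default input, so accel_perf refuses to start without -l. A working compress invocation would look roughly like this (a sketch; the binary and the bib test-file paths are the ones used elsewhere in this log, and -o 0 means "use the input file size" per the tool's own help text):

    ./build/examples/accel_perf -t 1 -w compress \
        -l ./test/accel/bib -o 0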
00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.407 00:08:09.407 real 0m0.354s 00:08:09.407 user 0m0.251s 00:08:09.407 sys 0m0.139s 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.407 12:25:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:09.407 ************************************ 00:08:09.407 END TEST accel_missing_filename 00:08:09.407 ************************************ 00:08:09.407 12:25:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.407 12:25:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:09.407 12:25:04 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:09.407 12:25:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.407 12:25:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.666 ************************************ 00:08:09.666 START TEST accel_compress_verify 00:08:09.666 ************************************ 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.666 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:09.666 12:25:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:09.666 12:25:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:09.666 12:25:04 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.666 12:25:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.666 12:25:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.666 12:25:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.666 
12:25:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.667 12:25:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:09.667 12:25:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:09.667 [2024-07-15 12:25:04.543576] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:09.667 [2024-07-15 12:25:04.543673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152954 ] 00:08:09.667 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.667 [2024-07-15 12:25:04.621909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.667 [2024-07-15 12:25:04.705149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.667 [2024-07-15 12:25:04.751179] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.925 [2024-07-15 12:25:04.820371] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:09.925 00:08:09.925 Compression does not support the verify option, aborting. 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.925 00:08:09.925 real 0m0.375s 00:08:09.925 user 0m0.267s 00:08:09.925 sys 0m0.146s 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.925 12:25:04 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:09.925 ************************************ 00:08:09.925 END TEST accel_compress_verify 00:08:09.925 ************************************ 00:08:09.925 12:25:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.925 12:25:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:09.925 12:25:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:09.925 12:25:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.925 12:25:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.925 ************************************ 00:08:09.925 START TEST accel_wrong_workload 00:08:09.925 ************************************ 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.925 12:25:04 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.925 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:09.925 12:25:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:09.925 Unsupported workload type: foobar 00:08:09.925 [2024-07-15 12:25:04.981073] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:09.925 accel_perf options: 00:08:09.925 [-h help message] 00:08:09.925 [-q queue depth per core] 00:08:09.925 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:09.925 [-T number of threads per core 00:08:09.925 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:09.926 [-t time in seconds] 00:08:09.926 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:09.926 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:09.926 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:09.926 [-l for compress/decompress workloads, name of uncompressed input file 00:08:09.926 [-S for crc32c workload, use this seed value (default 0) 00:08:09.926 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:09.926 [-f for fill workload, use this BYTE value (default 255) 00:08:09.926 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:09.926 [-y verify result if this switch is on] 00:08:09.926 [-a tasks to allocate per core (default: same value as -q)] 00:08:09.926 Can be used to spread operations across a wider range of memory. 
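The usage text above is printed because "foobar" is not one of the listed workload types; any workload from that list passes the check. For example, the crc32c case exercised later in this log boils down to (sketch, binary path as used throughout this run):

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # -w crc32c : a valid workload type from the list above
    # -S 32     : seed value for the crc32c calculation
    # -y        : verify the result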
00:08:09.926 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:09.926 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.926 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.926 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.926 00:08:09.926 real 0m0.025s 00:08:09.926 user 0m0.013s 00:08:09.926 sys 0m0.012s 00:08:09.926 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.926 12:25:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:09.926 ************************************ 00:08:09.926 END TEST accel_wrong_workload 00:08:09.926 ************************************ 00:08:09.926 Error: writing output failed: Broken pipe 00:08:09.926 12:25:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.926 12:25:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:09.926 12:25:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:09.926 12:25:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.926 12:25:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.184 ************************************ 00:08:10.184 START TEST accel_negative_buffers 00:08:10.184 ************************************ 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.184 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:10.184 12:25:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:10.184 -x option must be non-negative. 
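Likewise, the "-x option must be non-negative." message below is the expected rejection of -x -1; per the help text the xor workload needs at least two source buffers, so a valid variant would be roughly (sketch):

    ./build/examples/accel_perf -t 1 -w xor -y -x 2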
00:08:10.184 [2024-07-15 12:25:05.077159] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:10.184 accel_perf options: 00:08:10.184 [-h help message] 00:08:10.184 [-q queue depth per core] 00:08:10.184 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:10.184 [-T number of threads per core 00:08:10.184 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:10.184 [-t time in seconds] 00:08:10.184 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:10.184 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:10.184 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:10.184 [-l for compress/decompress workloads, name of uncompressed input file 00:08:10.184 [-S for crc32c workload, use this seed value (default 0) 00:08:10.184 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:10.184 [-f for fill workload, use this BYTE value (default 255) 00:08:10.185 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:10.185 [-y verify result if this switch is on] 00:08:10.185 [-a tasks to allocate per core (default: same value as -q)] 00:08:10.185 Can be used to spread operations across a wider range of memory. 00:08:10.185 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:10.185 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.185 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.185 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.185 00:08:10.185 real 0m0.028s 00:08:10.185 user 0m0.013s 00:08:10.185 sys 0m0.015s 00:08:10.185 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.185 12:25:05 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:10.185 ************************************ 00:08:10.185 END TEST accel_negative_buffers 00:08:10.185 ************************************ 00:08:10.185 Error: writing output failed: Broken pipe 00:08:10.185 12:25:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.185 12:25:05 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:10.185 12:25:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:10.185 12:25:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.185 12:25:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.185 ************************************ 00:08:10.185 START TEST accel_crc32c 00:08:10.185 ************************************ 00:08:10.185 12:25:05 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:10.185 12:25:05 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:10.185 [2024-07-15 12:25:05.179239] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:10.185 [2024-07-15 12:25:05.179324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153110 ] 00:08:10.185 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.185 [2024-07-15 12:25:05.257487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.443 [2024-07-15 12:25:05.346424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:10.443 12:25:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:11.840 12:25:06 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.840 00:08:11.840 real 0m1.381s 00:08:11.840 user 0m1.238s 00:08:11.840 sys 0m0.145s 00:08:11.840 12:25:06 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.840 12:25:06 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:11.840 ************************************ 00:08:11.840 END TEST accel_crc32c 00:08:11.840 ************************************ 00:08:11.840 12:25:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.840 12:25:06 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:11.840 12:25:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:11.840 12:25:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.840 12:25:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.840 ************************************ 00:08:11.840 START TEST accel_crc32c_C2 00:08:11.840 ************************************ 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:11.840 12:25:06 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:11.840 [2024-07-15 12:25:06.636430] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:11.840 [2024-07-15 12:25:06.636511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153311 ] 00:08:11.840 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.840 [2024-07-15 12:25:06.710957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.840 [2024-07-15 12:25:06.789945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:11.840 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:08:11.841 12:25:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.214 00:08:13.214 real 0m1.351s 00:08:13.214 user 0m1.219s 00:08:13.214 sys 0m0.133s 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.214 12:25:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:13.214 ************************************ 00:08:13.214 END TEST accel_crc32c_C2 00:08:13.214 ************************************ 00:08:13.214 12:25:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.214 12:25:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:13.214 12:25:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:13.214 12:25:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.214 12:25:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.214 ************************************ 00:08:13.214 START TEST accel_copy 00:08:13.214 ************************************ 00:08:13.214 12:25:08 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:13.214 [2024-07-15 12:25:08.050837] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:13.214 [2024-07-15 12:25:08.050925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153505 ] 00:08:13.214 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.214 [2024-07-15 12:25:08.123597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.214 [2024-07-15 12:25:08.206510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.214 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.215 12:25:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 
12:25:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:14.589 12:25:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.589 00:08:14.589 real 0m1.368s 00:08:14.589 user 0m1.230s 00:08:14.589 sys 0m0.140s 00:08:14.589 12:25:09 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.589 12:25:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:14.589 ************************************ 00:08:14.589 END TEST accel_copy 00:08:14.589 ************************************ 00:08:14.589 12:25:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.589 12:25:09 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:14.589 12:25:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:14.589 12:25:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.589 12:25:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.589 ************************************ 00:08:14.589 START TEST accel_fill 00:08:14.589 ************************************ 00:08:14.589 12:25:09 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:14.589 [2024-07-15 12:25:09.491458] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:14.589 [2024-07-15 12:25:09.491547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153699 ] 00:08:14.589 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.589 [2024-07-15 12:25:09.565973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.589 [2024-07-15 12:25:09.649030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:14.589 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:14.590 12:25:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.961 12:25:10 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:15.961 12:25:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.961 00:08:15.961 real 0m1.371s 00:08:15.961 user 0m1.233s 00:08:15.961 sys 0m0.140s 00:08:15.961 12:25:10 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.961 12:25:10 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:15.961 ************************************ 00:08:15.961 END TEST accel_fill 00:08:15.961 ************************************ 00:08:15.961 12:25:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.961 12:25:10 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:15.961 12:25:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:15.961 12:25:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.961 12:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.961 ************************************ 00:08:15.961 START TEST accel_copy_crc32c 00:08:15.961 ************************************ 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:15.961 12:25:10 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:15.961 [2024-07-15 12:25:10.940666] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:15.961 [2024-07-15 12:25:10.940744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153898 ] 00:08:15.961 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.961 [2024-07-15 12:25:11.017272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.219 [2024-07-15 12:25:11.100763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:16.219 
12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:16.219 12:25:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.594 00:08:17.594 real 0m1.375s 00:08:17.594 user 0m1.227s 00:08:17.594 sys 0m0.150s 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.594 12:25:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:17.594 ************************************ 00:08:17.594 END TEST accel_copy_crc32c 00:08:17.594 ************************************ 00:08:17.594 12:25:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.594 12:25:12 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:17.594 12:25:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:17.594 12:25:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.594 12:25:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.594 ************************************ 00:08:17.594 START TEST accel_copy_crc32c_C2 00:08:17.594 ************************************ 00:08:17.594 12:25:12 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:17.594 [2024-07-15 12:25:12.395046] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:17.594 [2024-07-15 12:25:12.395128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154089 ] 00:08:17.594 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.594 [2024-07-15 12:25:12.471155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.594 [2024-07-15 12:25:12.554674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:17.594 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:17.595 12:25:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.978 00:08:18.978 real 0m1.377s 00:08:18.978 user 0m1.234s 00:08:18.978 sys 0m0.146s 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.978 12:25:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:18.978 ************************************ 00:08:18.978 END TEST accel_copy_crc32c_C2 00:08:18.978 ************************************ 00:08:18.978 12:25:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.978 12:25:13 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:18.978 12:25:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:18.978 12:25:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.978 12:25:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.978 ************************************ 00:08:18.978 START TEST accel_dualcast 00:08:18.978 ************************************ 00:08:18.978 12:25:13 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:18.978 12:25:13 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:18.978 [2024-07-15 12:25:13.845917] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:18.978 [2024-07-15 12:25:13.845991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154292 ] 00:08:18.978 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.978 [2024-07-15 12:25:13.923824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.978 [2024-07-15 12:25:14.006348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:18.978 12:25:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:20.352 12:25:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.352 00:08:20.352 real 0m1.367s 00:08:20.352 user 0m1.228s 00:08:20.352 sys 0m0.142s 00:08:20.352 12:25:15 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.352 12:25:15 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:20.352 ************************************ 00:08:20.352 END TEST accel_dualcast 00:08:20.352 ************************************ 00:08:20.352 12:25:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.352 12:25:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:20.352 12:25:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:20.352 12:25:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.352 12:25:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.352 ************************************ 00:08:20.352 START TEST accel_compare 00:08:20.352 ************************************ 00:08:20.352 12:25:15 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:20.352 12:25:15 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:20.352 12:25:15 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:20.352 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.352 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.352 12:25:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:20.353 12:25:15 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:20.353 [2024-07-15 12:25:15.286196] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:20.353 [2024-07-15 12:25:15.286283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154532 ] 00:08:20.353 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.353 [2024-07-15 12:25:15.360012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.353 [2024-07-15 12:25:15.441132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.610 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:20.611 12:25:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 
12:25:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:21.544 12:25:16 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.544 00:08:21.544 real 0m1.355s 00:08:21.544 user 0m1.222s 00:08:21.544 sys 0m0.135s 00:08:21.544 12:25:16 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.544 12:25:16 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:21.544 ************************************ 00:08:21.544 END TEST accel_compare 00:08:21.544 ************************************ 00:08:21.544 12:25:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:21.544 12:25:16 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:21.544 12:25:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:21.544 12:25:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.544 12:25:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.804 ************************************ 00:08:21.804 START TEST accel_xor 00:08:21.804 ************************************ 00:08:21.804 12:25:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:21.804 [2024-07-15 12:25:16.706599] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:21.804 [2024-07-15 12:25:16.706660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154761 ] 00:08:21.804 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.804 [2024-07-15 12:25:16.773027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.804 [2024-07-15 12:25:16.856618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.804 12:25:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.177 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.178 00:08:23.178 real 0m1.355s 00:08:23.178 user 0m1.229s 00:08:23.178 sys 0m0.127s 00:08:23.178 12:25:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.178 12:25:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:23.178 ************************************ 00:08:23.178 END TEST accel_xor 00:08:23.178 ************************************ 00:08:23.178 12:25:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:23.178 12:25:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:23.178 12:25:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:23.178 12:25:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.178 12:25:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.178 ************************************ 00:08:23.178 START TEST accel_xor 00:08:23.178 ************************************ 00:08:23.178 12:25:18 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:23.178 12:25:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:23.178 [2024-07-15 12:25:18.147346] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:23.178 [2024-07-15 12:25:18.147426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154999 ] 00:08:23.178 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.178 [2024-07-15 12:25:18.223612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.436 [2024-07-15 12:25:18.307500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.436 12:25:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:24.811 12:25:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.811 00:08:24.811 real 0m1.376s 00:08:24.811 user 0m1.228s 00:08:24.811 sys 0m0.149s 00:08:24.811 12:25:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.811 12:25:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:24.811 ************************************ 00:08:24.811 END TEST accel_xor 00:08:24.811 ************************************ 00:08:24.811 12:25:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:24.811 12:25:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:24.811 12:25:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:24.811 12:25:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.811 12:25:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.811 ************************************ 00:08:24.811 START TEST accel_dif_verify 00:08:24.811 ************************************ 00:08:24.811 12:25:19 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:24.811 [2024-07-15 12:25:19.601985] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:24.811 [2024-07-15 12:25:19.602066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155228 ] 00:08:24.811 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.811 [2024-07-15 12:25:19.677202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.811 [2024-07-15 12:25:19.763097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.811 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:24.812 12:25:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:26.187 12:25:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.187 00:08:26.187 real 0m1.378s 00:08:26.187 user 0m1.235s 00:08:26.187 sys 0m0.145s 00:08:26.187 12:25:20 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.187 12:25:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:26.187 ************************************ 00:08:26.187 END TEST accel_dif_verify 00:08:26.187 ************************************ 00:08:26.187 12:25:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:26.187 12:25:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:26.187 12:25:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:26.187 12:25:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.187 12:25:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.187 ************************************ 00:08:26.187 START TEST accel_dif_generate 00:08:26.187 ************************************ 00:08:26.187 12:25:21 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 
12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:26.187 [2024-07-15 12:25:21.054884] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:26.187 [2024-07-15 12:25:21.054966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155427 ] 00:08:26.187 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.187 [2024-07-15 12:25:21.130418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.187 [2024-07-15 12:25:21.213235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.187 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:26.188 12:25:21 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:26.188 12:25:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.562 12:25:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:27.562 12:25:22 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.562 00:08:27.562 real 0m1.373s 00:08:27.562 user 0m1.238s 00:08:27.562 sys 0m0.137s 00:08:27.562 12:25:22 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.562 12:25:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:27.562 ************************************ 00:08:27.562 END TEST accel_dif_generate 00:08:27.562 ************************************ 00:08:27.562 12:25:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:27.562 12:25:22 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:27.562 12:25:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:27.562 12:25:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.562 12:25:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.562 ************************************ 00:08:27.562 START TEST accel_dif_generate_copy 00:08:27.562 ************************************ 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:27.562 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:27.562 [2024-07-15 12:25:22.500966] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:27.562 [2024-07-15 12:25:22.501053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155625 ] 00:08:27.562 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.562 [2024-07-15 12:25:22.573974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.562 [2024-07-15 12:25:22.652666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:27.820 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:27.821 12:25:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.754 00:08:28.754 real 0m1.352s 00:08:28.754 user 0m1.214s 00:08:28.754 sys 0m0.139s 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.754 12:25:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:28.754 ************************************ 00:08:28.754 END TEST accel_dif_generate_copy 00:08:28.754 ************************************ 00:08:28.754 12:25:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.754 12:25:23 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:28.754 12:25:23 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:28.754 12:25:23 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:28.754 12:25:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.754 12:25:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.012 ************************************ 00:08:29.012 START TEST accel_comp 00:08:29.012 ************************************ 00:08:29.012 12:25:23 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:29.012 12:25:23 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:29.012 12:25:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:29.012 [2024-07-15 12:25:23.927665] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:29.012 [2024-07-15 12:25:23.927741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155819 ] 00:08:29.012 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.012 [2024-07-15 12:25:24.002569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.012 [2024-07-15 12:25:24.084823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.012 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:29.013 12:25:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:30.416 12:25:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.416 00:08:30.416 real 0m1.366s 00:08:30.416 user 0m1.221s 00:08:30.416 sys 0m0.146s 00:08:30.417 12:25:25 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.417 12:25:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:30.417 ************************************ 00:08:30.417 END TEST accel_comp 00:08:30.417 ************************************ 00:08:30.417 12:25:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.417 12:25:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:30.417 12:25:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:30.417 12:25:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:30.417 12:25:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.417 ************************************ 00:08:30.417 START TEST accel_decomp 00:08:30.417 ************************************ 00:08:30.417 12:25:25 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:30.417 12:25:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:30.417 [2024-07-15 12:25:25.366953] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:30.417 [2024-07-15 12:25:25.367036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156018 ] 00:08:30.417 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.417 [2024-07-15 12:25:25.441823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.417 [2024-07-15 12:25:25.524974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:30.675 12:25:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.609 12:25:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.609 00:08:31.609 real 0m1.374s 00:08:31.609 user 0m1.234s 00:08:31.609 sys 0m0.143s 00:08:31.609 12:25:26 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.609 12:25:26 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:31.609 ************************************ 00:08:31.609 END TEST accel_decomp 00:08:31.609 ************************************ 00:08:31.867 12:25:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.867 12:25:26 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:31.867 12:25:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:31.867 12:25:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.867 12:25:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.867 ************************************ 00:08:31.867 START TEST accel_decomp_full 00:08:31.867 ************************************ 00:08:31.867 12:25:26 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:31.867 12:25:26 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:31.867 12:25:26 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:31.867 [2024-07-15 12:25:26.817291] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:31.867 [2024-07-15 12:25:26.817364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156214 ] 00:08:31.867 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.867 [2024-07-15 12:25:26.894434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.867 [2024-07-15 12:25:26.982979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:32.125 12:25:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.495 12:25:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.495 00:08:33.495 real 0m1.390s 00:08:33.495 user 0m1.243s 00:08:33.495 sys 0m0.148s 00:08:33.495 12:25:28 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.495 12:25:28 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:33.495 ************************************ 00:08:33.495 END TEST accel_decomp_full 00:08:33.495 ************************************ 00:08:33.495 12:25:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.495 12:25:28 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:33.495 12:25:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:08:33.495 12:25:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.495 12:25:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.495 ************************************ 00:08:33.495 START TEST accel_decomp_mcore 00:08:33.495 ************************************ 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:33.495 [2024-07-15 12:25:28.283948] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:33.495 [2024-07-15 12:25:28.284031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156411 ] 00:08:33.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.495 [2024-07-15 12:25:28.359481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.495 [2024-07-15 12:25:28.445470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.495 [2024-07-15 12:25:28.445565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.495 [2024-07-15 12:25:28.445600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.495 [2024-07-15 12:25:28.445602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.495 12:25:28 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.495 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.496 12:25:28 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:08:33.496 12:25:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.866 00:08:34.866 real 0m1.396s 00:08:34.866 user 0m4.634s 00:08:34.866 sys 0m0.156s 00:08:34.866 12:25:29 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.866 12:25:29 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:34.866 ************************************ 00:08:34.866 END TEST accel_decomp_mcore 00:08:34.866 ************************************ 00:08:34.866 12:25:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:34.866 12:25:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.866 12:25:29 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:34.866 12:25:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.866 12:25:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.866 ************************************ 00:08:34.866 START TEST accel_decomp_full_mcore 00:08:34.866 ************************************ 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:34.866 [2024-07-15 12:25:29.748600] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:34.866 [2024-07-15 12:25:29.748685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156609 ] 00:08:34.866 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.866 [2024-07-15 12:25:29.822710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.866 [2024-07-15 12:25:29.909655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.866 [2024-07-15 12:25:29.909742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.866 [2024-07-15 12:25:29.909818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.866 [2024-07-15 12:25:29.909819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.866 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.867 12:25:29 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:34.867 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.867 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.867 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.867 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.867 12:25:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.239 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.240 00:08:36.240 real 0m1.403s 00:08:36.240 user 0m4.660s 00:08:36.240 sys 0m0.152s 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.240 12:25:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:36.240 ************************************ 00:08:36.240 END TEST accel_decomp_full_mcore 00:08:36.240 ************************************ 00:08:36.240 12:25:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.240 12:25:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.240 12:25:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:36.240 12:25:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.240 12:25:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.240 ************************************ 00:08:36.240 START TEST accel_decomp_mthread 00:08:36.240 ************************************ 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:36.240 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:36.240 [2024-07-15 12:25:31.220615] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:36.240 [2024-07-15 12:25:31.220697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156814 ] 00:08:36.240 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.240 [2024-07-15 12:25:31.295044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.499 [2024-07-15 12:25:31.375907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.499 12:25:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.432 12:25:32 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.432 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.690 00:08:37.690 real 0m1.364s 00:08:37.690 user 0m1.232s 00:08:37.690 sys 0m0.148s 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.690 12:25:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:37.690 ************************************ 00:08:37.690 END TEST accel_decomp_mthread 00:08:37.690 ************************************ 00:08:37.690 12:25:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:37.690 12:25:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:37.690 12:25:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:37.690 12:25:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:37.690 12:25:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.690 ************************************ 00:08:37.690 START TEST accel_decomp_full_mthread 00:08:37.690 ************************************ 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:37.690 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:37.690 [2024-07-15 12:25:32.653009] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:37.690 [2024-07-15 12:25:32.653091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157008 ] 00:08:37.690 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.690 [2024-07-15 12:25:32.728634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.690 [2024-07-15 12:25:32.811883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.948 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.949 12:25:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.949 12:25:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.323 00:08:39.323 real 0m1.405s 00:08:39.323 user 0m1.273s 00:08:39.323 sys 0m0.146s 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.323 12:25:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:39.323 ************************************ 00:08:39.323 END 
TEST accel_decomp_full_mthread 00:08:39.323 ************************************ 00:08:39.323 12:25:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.323 12:25:34 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:39.323 12:25:34 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:39.323 12:25:34 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:39.323 12:25:34 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:39.323 12:25:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.323 12:25:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.323 12:25:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.323 12:25:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.323 12:25:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.323 12:25:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.323 12:25:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.323 12:25:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:39.323 12:25:34 accel -- accel/accel.sh@41 -- # jq -r . 00:08:39.323 ************************************ 00:08:39.323 START TEST accel_dif_functional_tests 00:08:39.323 ************************************ 00:08:39.323 12:25:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:39.323 [2024-07-15 12:25:34.142707] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:39.323 [2024-07-15 12:25:34.142791] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157203 ] 00:08:39.323 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.323 [2024-07-15 12:25:34.224308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.323 [2024-07-15 12:25:34.314508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.323 [2024-07-15 12:25:34.314619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.323 [2024-07-15 12:25:34.314622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.323 00:08:39.323 00:08:39.323 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.323 http://cunit.sourceforge.net/ 00:08:39.323 00:08:39.323 00:08:39.323 Suite: accel_dif 00:08:39.323 Test: verify: DIF generated, GUARD check ...passed 00:08:39.323 Test: verify: DIF generated, APPTAG check ...passed 00:08:39.323 Test: verify: DIF generated, REFTAG check ...passed 00:08:39.323 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:25:34.393905] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:39.323 passed 00:08:39.323 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:25:34.393966] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:39.323 passed 00:08:39.323 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:25:34.393991] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:39.323 passed 00:08:39.323 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:39.323 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-07-15 12:25:34.394039] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:39.323 passed 00:08:39.323 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:39.323 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:39.323 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:39.323 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:25:34.394155] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:39.323 passed 00:08:39.323 Test: verify copy: DIF generated, GUARD check ...passed 00:08:39.323 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:39.323 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:39.324 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:25:34.394278] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:39.324 passed 00:08:39.324 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:25:34.394306] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:39.324 passed 00:08:39.324 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:25:34.394334] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:39.324 passed 00:08:39.324 Test: generate copy: DIF generated, GUARD check ...passed 00:08:39.324 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:39.324 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:39.324 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:39.324 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:39.324 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:39.324 Test: generate copy: iovecs-len validate ...[2024-07-15 12:25:34.394507] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:39.324 passed 00:08:39.324 Test: generate copy: buffer alignment validate ...passed 00:08:39.324 00:08:39.324 Run Summary: Type Total Ran Passed Failed Inactive 00:08:39.324 suites 1 1 n/a 0 0 00:08:39.324 tests 26 26 26 0 0 00:08:39.324 asserts 115 115 115 0 n/a 00:08:39.324 00:08:39.324 Elapsed time = 0.002 seconds 00:08:39.582 00:08:39.582 real 0m0.463s 00:08:39.582 user 0m0.651s 00:08:39.582 sys 0m0.176s 00:08:39.582 12:25:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.582 12:25:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:39.582 ************************************ 00:08:39.582 END TEST accel_dif_functional_tests 00:08:39.582 ************************************ 00:08:39.582 12:25:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.582 00:08:39.582 real 0m32.062s 00:08:39.582 user 0m34.839s 00:08:39.582 sys 0m5.220s 00:08:39.582 12:25:34 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.582 12:25:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.582 ************************************ 00:08:39.582 END TEST accel 00:08:39.582 ************************************ 00:08:39.582 12:25:34 -- common/autotest_common.sh@1142 -- # return 0 00:08:39.582 12:25:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:39.582 12:25:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.582 12:25:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.582 12:25:34 -- common/autotest_common.sh@10 -- # set +x 00:08:39.840 ************************************ 00:08:39.840 START TEST accel_rpc 00:08:39.840 ************************************ 00:08:39.840 12:25:34 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:39.840 * Looking for test storage... 00:08:39.840 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:39.840 12:25:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:39.840 12:25:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4157432 00:08:39.840 12:25:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4157432 00:08:39.840 12:25:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:39.841 12:25:34 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4157432 ']' 00:08:39.841 12:25:34 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.841 12:25:34 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.841 12:25:34 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.841 12:25:34 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.841 12:25:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.841 [2024-07-15 12:25:34.847250] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:39.841 [2024-07-15 12:25:34.847323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157432 ] 00:08:39.841 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.841 [2024-07-15 12:25:34.922777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.099 [2024-07-15 12:25:35.013551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.665 12:25:35 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.665 12:25:35 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:40.665 12:25:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:40.665 12:25:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:40.665 12:25:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:40.665 12:25:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:40.665 12:25:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:40.665 12:25:35 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.665 12:25:35 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.665 12:25:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.665 ************************************ 00:08:40.665 START TEST accel_assign_opcode 00:08:40.665 ************************************ 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.665 [2024-07-15 12:25:35.723670] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.665 [2024-07-15 12:25:35.731670] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.665 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.923 software 00:08:40.923 00:08:40.923 real 0m0.260s 00:08:40.923 user 0m0.046s 00:08:40.923 sys 0m0.014s 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.923 12:25:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.923 ************************************ 00:08:40.923 END TEST accel_assign_opcode 00:08:40.923 ************************************ 00:08:40.923 12:25:36 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:40.923 12:25:36 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4157432 00:08:40.923 12:25:36 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4157432 ']' 00:08:40.923 12:25:36 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4157432 00:08:40.923 12:25:36 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:40.923 12:25:36 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.923 12:25:36 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4157432 00:08:41.181 12:25:36 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.181 12:25:36 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.181 12:25:36 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4157432' 00:08:41.181 killing process with pid 4157432 00:08:41.181 12:25:36 accel_rpc -- common/autotest_common.sh@967 -- # kill 4157432 00:08:41.181 12:25:36 accel_rpc -- common/autotest_common.sh@972 -- # wait 4157432 00:08:41.440 00:08:41.440 real 0m1.664s 00:08:41.440 user 0m1.678s 00:08:41.440 sys 0m0.515s 00:08:41.440 12:25:36 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.440 12:25:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 ************************************ 00:08:41.440 END TEST accel_rpc 00:08:41.440 ************************************ 00:08:41.440 12:25:36 -- common/autotest_common.sh@1142 -- # return 0 00:08:41.440 12:25:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:41.440 12:25:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.440 12:25:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.440 12:25:36 -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 ************************************ 00:08:41.440 START TEST app_cmdline 00:08:41.440 ************************************ 00:08:41.440 12:25:36 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:41.440 * Looking for test storage... 
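For reference, the accel_assign_opcode run traced above reduces to three JSON-RPC calls made while the target is still paused. A minimal standalone sketch of that sequence (not captured output), assuming a spdk_tgt already started from the SPDK root with --wait-for-rpc and listening on the default /var/tmp/spdk.sock:

  # opcode assignments must land before subsystem init, hence --wait-for-rpc
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  # finish initialization, then confirm the copy opcode is bound to the software module
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software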
00:08:41.440 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:41.440 12:25:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:41.699 12:25:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:41.699 12:25:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4157690 00:08:41.699 12:25:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4157690 00:08:41.699 12:25:36 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4157690 ']' 00:08:41.699 12:25:36 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.699 12:25:36 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.699 12:25:36 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.699 12:25:36 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.699 12:25:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:41.699 [2024-07-15 12:25:36.584684] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:41.699 [2024-07-15 12:25:36.584742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157690 ] 00:08:41.699 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.699 [2024-07-15 12:25:36.656233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.699 [2024-07-15 12:25:36.745794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:42.633 { 00:08:42.633 "version": "SPDK v24.09-pre git sha1 dff473c1d", 00:08:42.633 "fields": { 00:08:42.633 "major": 24, 00:08:42.633 "minor": 9, 00:08:42.633 "patch": 0, 00:08:42.633 "suffix": "-pre", 00:08:42.633 "commit": "dff473c1d" 00:08:42.633 } 00:08:42.633 } 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:42.633 12:25:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:08:42.633 12:25:37 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.892 request: 00:08:42.892 { 00:08:42.892 "method": "env_dpdk_get_mem_stats", 00:08:42.892 "req_id": 1 00:08:42.892 } 00:08:42.892 Got JSON-RPC error response 00:08:42.892 response: 00:08:42.892 { 00:08:42.892 "code": -32601, 00:08:42.892 "message": "Method not found" 00:08:42.892 } 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.892 12:25:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4157690 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4157690 ']' 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4157690 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4157690 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4157690' 00:08:42.892 killing process with pid 4157690 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 4157690 00:08:42.892 12:25:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 4157690 00:08:43.150 00:08:43.150 real 0m1.716s 00:08:43.150 user 0m1.969s 00:08:43.150 sys 0m0.493s 00:08:43.150 12:25:38 app_cmdline -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:43.150 12:25:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.150 ************************************ 00:08:43.150 END TEST app_cmdline 00:08:43.150 ************************************ 00:08:43.150 12:25:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:43.150 12:25:38 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:43.150 12:25:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.150 12:25:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.150 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.150 ************************************ 00:08:43.150 START TEST version 00:08:43.150 ************************************ 00:08:43.150 12:25:38 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:43.408 * Looking for test storage... 00:08:43.408 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:43.408 12:25:38 version -- app/version.sh@17 -- # get_header_version major 00:08:43.408 12:25:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # cut -f2 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.408 12:25:38 version -- app/version.sh@17 -- # major=24 00:08:43.408 12:25:38 version -- app/version.sh@18 -- # get_header_version minor 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # cut -f2 00:08:43.408 12:25:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.408 12:25:38 version -- app/version.sh@18 -- # minor=9 00:08:43.408 12:25:38 version -- app/version.sh@19 -- # get_header_version patch 00:08:43.408 12:25:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # cut -f2 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.408 12:25:38 version -- app/version.sh@19 -- # patch=0 00:08:43.408 12:25:38 version -- app/version.sh@20 -- # get_header_version suffix 00:08:43.408 12:25:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:43.408 12:25:38 version -- app/version.sh@14 -- # cut -f2 00:08:43.408 12:25:38 version -- app/version.sh@20 -- # suffix=-pre 00:08:43.408 12:25:38 version -- app/version.sh@22 -- # version=24.9 00:08:43.408 12:25:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:43.408 12:25:38 version -- app/version.sh@28 -- # version=24.9rc0 00:08:43.408 12:25:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:43.408 12:25:38 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:08:43.408 12:25:38 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:43.408 12:25:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:43.408 00:08:43.408 real 0m0.186s 00:08:43.408 user 0m0.093s 00:08:43.408 sys 0m0.132s 00:08:43.408 12:25:38 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.408 12:25:38 version -- common/autotest_common.sh@10 -- # set +x 00:08:43.408 ************************************ 00:08:43.408 END TEST version 00:08:43.408 ************************************ 00:08:43.408 12:25:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:43.408 12:25:38 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@198 -- # uname -s 00:08:43.408 12:25:38 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:43.408 12:25:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.408 12:25:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.408 12:25:38 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:43.408 12:25:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.408 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.408 12:25:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:08:43.408 12:25:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:08:43.408 12:25:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:08:43.408 12:25:38 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:08:43.408 12:25:38 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:43.408 12:25:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.408 12:25:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.408 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:08:43.667 ************************************ 00:08:43.667 START TEST llvm_fuzz 00:08:43.667 ************************************ 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:43.667 * Looking for test storage... 
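The version test traced above simply cross-checks the C header against the installed Python package. A condensed sketch of the same check (not captured output), assuming it runs from the SPDK source root and that PYTHONPATH already includes the repo's python/ directory as in the trace:

  get_header_version() {
      # version.h fields are tab-separated, e.g. '#define SPDK_VERSION_MAJOR<TAB>24'
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)
  minor=$(get_header_version MINOR)
  # patch is 0 and the suffix is -pre here, which maps to an rc0 Python version (24.9 -> 24.9rc0)
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "${major}.${minor}rc0" ]] && echo "header and python package versions agree"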
00:08:43.667 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:43.667 12:25:38 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.667 12:25:38 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:43.667 ************************************ 00:08:43.667 START TEST nvmf_llvm_fuzz 00:08:43.667 ************************************ 00:08:43.667 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:43.667 * Looking for test storage... 
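Fuzzer target selection in llvm.sh, as traced above, is plain filename globbing: every entry under test/fuzz/llvm/ becomes a candidate and the per-target run.sh is dispatched through a case statement. A minimal sketch of that enumeration (not captured output), assuming $rootdir is the SPDK checkout and run_test comes from autotest_common.sh:

  fuzzers=("$rootdir/test/fuzz/llvm/"*)   # globs to: common.sh llvm-gcov.sh nvmf vfio
  fuzzers=("${fuzzers[@]##*/}")           # keep only the basenames
  for fuzzer in "${fuzzers[@]}"; do
      case "$fuzzer" in
          nvmf) run_test nvmf_llvm_fuzz "$rootdir/test/fuzz/llvm/nvmf/run.sh" ;;
          *) ;;   # helper scripts fall through; other targets get their own case arms
      esac
  done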
00:08:43.667 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:43.667 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:43.667 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:43.928 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:43.929 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:43.929 #define SPDK_CONFIG_H 00:08:43.929 #define SPDK_CONFIG_APPS 1 00:08:43.929 #define SPDK_CONFIG_ARCH native 00:08:43.929 #undef SPDK_CONFIG_ASAN 00:08:43.929 #undef SPDK_CONFIG_AVAHI 00:08:43.929 #undef SPDK_CONFIG_CET 00:08:43.929 #define SPDK_CONFIG_COVERAGE 1 00:08:43.929 #define SPDK_CONFIG_CROSS_PREFIX 00:08:43.929 #undef SPDK_CONFIG_CRYPTO 00:08:43.929 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:43.929 #undef SPDK_CONFIG_CUSTOMOCF 00:08:43.929 #undef SPDK_CONFIG_DAOS 00:08:43.929 #define SPDK_CONFIG_DAOS_DIR 00:08:43.929 #define SPDK_CONFIG_DEBUG 1 00:08:43.929 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:43.929 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:43.929 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:43.929 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:43.929 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:43.929 #undef SPDK_CONFIG_DPDK_UADK 00:08:43.929 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:43.929 #define SPDK_CONFIG_EXAMPLES 1 00:08:43.929 #undef SPDK_CONFIG_FC 00:08:43.929 #define SPDK_CONFIG_FC_PATH 00:08:43.929 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:43.929 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:43.929 #undef SPDK_CONFIG_FUSE 00:08:43.929 #define SPDK_CONFIG_FUZZER 1 00:08:43.929 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:43.929 #undef SPDK_CONFIG_GOLANG 00:08:43.929 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:43.929 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:43.929 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:43.929 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:43.929 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:43.929 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:43.929 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:43.929 #define SPDK_CONFIG_IDXD 1 00:08:43.929 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:43.929 #undef SPDK_CONFIG_IPSEC_MB 00:08:43.929 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:43.929 #define SPDK_CONFIG_ISAL 1 00:08:43.929 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:43.929 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:43.929 #define SPDK_CONFIG_LIBDIR 00:08:43.929 #undef SPDK_CONFIG_LTO 00:08:43.929 #define SPDK_CONFIG_MAX_LCORES 128 00:08:43.929 #define SPDK_CONFIG_NVME_CUSE 1 00:08:43.929 #undef SPDK_CONFIG_OCF 00:08:43.929 #define SPDK_CONFIG_OCF_PATH 00:08:43.929 #define SPDK_CONFIG_OPENSSL_PATH 00:08:43.929 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:43.929 #define SPDK_CONFIG_PGO_DIR 00:08:43.929 #undef SPDK_CONFIG_PGO_USE 00:08:43.929 #define SPDK_CONFIG_PREFIX /usr/local 00:08:43.929 #undef SPDK_CONFIG_RAID5F 00:08:43.929 #undef SPDK_CONFIG_RBD 00:08:43.929 #define SPDK_CONFIG_RDMA 1 00:08:43.929 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:43.929 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:43.929 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:43.929 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:43.929 #undef SPDK_CONFIG_SHARED 00:08:43.929 #undef SPDK_CONFIG_SMA 00:08:43.929 #define SPDK_CONFIG_TESTS 1 00:08:43.929 #undef SPDK_CONFIG_TSAN 00:08:43.929 #define SPDK_CONFIG_UBLK 1 00:08:43.929 #define SPDK_CONFIG_UBSAN 1 00:08:43.929 #undef SPDK_CONFIG_UNIT_TESTS 00:08:43.929 #undef SPDK_CONFIG_URING 00:08:43.929 #define SPDK_CONFIG_URING_PATH 00:08:43.929 #undef SPDK_CONFIG_URING_ZNS 00:08:43.929 #undef SPDK_CONFIG_USDT 00:08:43.929 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:43.929 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:43.929 #define SPDK_CONFIG_VFIO_USER 1 00:08:43.929 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:43.929 #define SPDK_CONFIG_VHOST 1 00:08:43.929 #define SPDK_CONFIG_VIRTIO 1 00:08:43.929 #undef SPDK_CONFIG_VTUNE 00:08:43.929 #define SPDK_CONFIG_VTUNE_DIR 00:08:43.929 #define SPDK_CONFIG_WERROR 1 00:08:43.929 #define SPDK_CONFIG_WPDK_DIR 00:08:43.930 #undef SPDK_CONFIG_XNVME 00:08:43.930 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:43.930 
12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:43.930 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:43.931 12:25:38 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:43.931 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:43.932 12:25:38 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:43.932 12:25:38 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 4158193 ]] 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 4158193 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.4aOteZ 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.4aOteZ/tests/nvmf /tmp/spdk.4aOteZ 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=87064842240 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508576768 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7443734528 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895826944 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5890048 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253770240 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254290432 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=520192 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:43.932 * Looking for test storage... 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=87064842240 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:43.932 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9658327040 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:43.933 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:43.933 12:25:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:43.933 [2024-07-15 12:25:39.002728] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:43.933 [2024-07-15 12:25:39.002815] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158233 ] 00:08:43.933 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.191 [2024-07-15 12:25:39.204299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.191 [2024-07-15 12:25:39.277174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.450 [2024-07-15 12:25:39.337023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.450 [2024-07-15 12:25:39.353227] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:44.450 INFO: Running with entropic power schedule (0xFF, 100). 00:08:44.450 INFO: Seed: 2462997035 00:08:44.450 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:08:44.450 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:08:44.450 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:44.450 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.450 #2 INITED exec/s: 0 rss: 64Mb 00:08:44.450 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
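[Editor note] The nvmf/run.sh trace above records the full per-fuzzer setup: a TCP port is derived from the fuzzer index, a corpus directory is created, the trsvcid in the JSON target config is rewritten to that port, LSAN leak suppressions are registered, and the llvm_nvme_fuzz harness is launched against the resulting listener. Below is a condensed bash sketch of that sequence, reconstructed only from the commands visible in this log (not the verbatim run.sh; in particular, the redirect of the sed output into /tmp/fuzz_json_0.conf is assumed):

    # Reconstructed sketch of the per-fuzzer launch recorded above (fuzzer index 0, short mode).
    fuzzer_type=0
    timen=1                            # seconds per fuzzer in short mode
    core=0x1                           # reactor core mask
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    port=44$(printf %02d "$fuzzer_type")                      # 4400, 4401, ...
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    mkdir -p "$corpus_dir"

    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"

    # Leaks that are expected inside the target and should not fail the run.
    {
        echo leak:spdk_nvmf_qpair_disconnect
        echo leak:nvmf_ctrlr_create
    } > /var/tmp/suppress_nvmf_fuzz
    export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0

    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$rootdir/../output/llvm/" -F "$trid" -c "/tmp/fuzz_json_${fuzzer_type}.conf" \
        -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"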
00:08:44.450 This may also happen if the target rejected all inputs we tried so far 00:08:44.450 [2024-07-15 12:25:39.398436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.450 [2024-07-15 12:25:39.398466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.709 NEW_FUNC[1/696]: 0x483e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:44.709 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:44.709 #28 NEW cov: 11861 ft: 11852 corp: 2/108b lim: 320 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 InsertRepeatedBytes- 00:08:44.709 [2024-07-15 12:25:39.739358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.709 [2024-07-15 12:25:39.739400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.709 #29 NEW cov: 11993 ft: 12319 corp: 3/215b lim: 320 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 ChangeBit- 00:08:44.709 [2024-07-15 12:25:39.789391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.709 [2024-07-15 12:25:39.789419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.709 #37 NEW cov: 11999 ft: 12704 corp: 4/282b lim: 320 exec/s: 0 rss: 72Mb L: 67/107 MS: 3 CopyPart-ChangeBit-CrossOver- 00:08:44.709 [2024-07-15 12:25:39.829534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:4 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:44.709 [2024-07-15 12:25:39.829560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.968 #43 NEW cov: 12103 ft: 12932 corp: 5/366b lim: 320 exec/s: 0 rss: 72Mb L: 84/107 MS: 1 InsertRepeatedBytes- 00:08:44.968 [2024-07-15 12:25:39.869615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.968 [2024-07-15 12:25:39.869641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.968 #44 NEW cov: 12103 ft: 12978 corp: 6/434b lim: 320 exec/s: 0 rss: 72Mb L: 68/107 MS: 1 InsertByte- 00:08:44.968 [2024-07-15 12:25:39.919725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.968 [2024-07-15 12:25:39.919754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.968 #45 NEW cov: 12103 ft: 13058 corp: 7/541b lim: 320 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 ShuffleBytes- 00:08:44.968 [2024-07-15 12:25:39.969861] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.968 [2024-07-15 12:25:39.969887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.968 #51 NEW cov: 12103 ft: 13173 corp: 8/639b lim: 320 exec/s: 0 rss: 72Mb L: 98/107 MS: 1 CrossOver- 00:08:44.968 [2024-07-15 12:25:40.020044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.968 [2024-07-15 12:25:40.020072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.968 #52 NEW cov: 12103 ft: 13291 corp: 9/746b lim: 320 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 ChangeBinInt- 00:08:44.968 [2024-07-15 12:25:40.060138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:44.968 [2024-07-15 12:25:40.060170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 #53 NEW cov: 12103 ft: 13387 corp: 10/827b lim: 320 exec/s: 0 rss: 72Mb L: 81/107 MS: 1 EraseBytes- 00:08:45.227 [2024-07-15 12:25:40.110293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.227 [2024-07-15 12:25:40.110326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 #64 NEW cov: 12103 ft: 13421 corp: 11/934b lim: 320 exec/s: 0 rss: 72Mb L: 107/107 MS: 1 ShuffleBytes- 00:08:45.227 [2024-07-15 12:25:40.150567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:4 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:45.227 [2024-07-15 12:25:40.150594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 [2024-07-15 12:25:40.150654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:45.227 [2024-07-15 12:25:40.150668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.227 #65 NEW cov: 12103 ft: 13819 corp: 12/1075b lim: 320 exec/s: 0 rss: 72Mb L: 141/141 MS: 1 CopyPart- 00:08:45.227 [2024-07-15 12:25:40.200539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.227 [2024-07-15 12:25:40.200565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 #66 NEW cov: 12103 ft: 13908 corp: 13/1162b lim: 320 exec/s: 0 rss: 73Mb L: 87/141 MS: 1 EraseBytes- 00:08:45.227 [2024-07-15 12:25:40.240631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff 
cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.227 [2024-07-15 12:25:40.240657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 #72 NEW cov: 12103 ft: 14037 corp: 14/1269b lim: 320 exec/s: 0 rss: 73Mb L: 107/141 MS: 1 CrossOver- 00:08:45.227 [2024-07-15 12:25:40.290792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffff0000 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.227 [2024-07-15 12:25:40.290824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:45.227 #73 NEW cov: 12126 ft: 14097 corp: 15/1337b lim: 320 exec/s: 0 rss: 73Mb L: 68/141 MS: 1 ChangeBinInt- 00:08:45.227 [2024-07-15 12:25:40.341080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:4 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:45.227 [2024-07-15 12:25:40.341106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.227 [2024-07-15 12:25:40.341165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:45.227 [2024-07-15 12:25:40.341180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.486 #74 NEW cov: 12126 ft: 14114 corp: 16/1478b lim: 320 exec/s: 0 rss: 73Mb L: 141/141 MS: 1 CopyPart- 00:08:45.486 [2024-07-15 12:25:40.391115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.486 [2024-07-15 12:25:40.391140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.486 #75 NEW cov: 12126 ft: 14134 corp: 17/1602b lim: 320 exec/s: 75 rss: 73Mb L: 124/141 MS: 1 CopyPart- 00:08:45.486 [2024-07-15 12:25:40.431190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.486 [2024-07-15 12:25:40.431215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.486 #76 NEW cov: 12126 ft: 14152 corp: 18/1726b lim: 320 exec/s: 76 rss: 73Mb L: 124/141 MS: 1 CrossOver- 00:08:45.486 [2024-07-15 12:25:40.471310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:32ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.486 [2024-07-15 12:25:40.471336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.486 #77 NEW cov: 12126 ft: 14154 corp: 19/1807b lim: 320 exec/s: 77 rss: 73Mb L: 81/141 MS: 1 ChangeByte- 00:08:45.486 [2024-07-15 12:25:40.521418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND 
(ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.486 [2024-07-15 12:25:40.521445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.486 #78 NEW cov: 12126 ft: 14168 corp: 20/1914b lim: 320 exec/s: 78 rss: 73Mb L: 107/141 MS: 1 ChangeByte- 00:08:45.486 [2024-07-15 12:25:40.571565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.486 [2024-07-15 12:25:40.571591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.486 #79 NEW cov: 12126 ft: 14207 corp: 21/2021b lim: 320 exec/s: 79 rss: 73Mb L: 107/141 MS: 1 ChangeBit- 00:08:45.486 [2024-07-15 12:25:40.611643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffff7eff 00:08:45.486 [2024-07-15 12:25:40.611669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.745 #80 NEW cov: 12126 ft: 14212 corp: 22/2128b lim: 320 exec/s: 80 rss: 73Mb L: 107/141 MS: 1 ChangeByte- 00:08:45.745 [2024-07-15 12:25:40.661891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.661917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.745 #81 NEW cov: 12126 ft: 14228 corp: 23/2235b lim: 320 exec/s: 81 rss: 73Mb L: 107/141 MS: 1 ChangeBit- 00:08:45.745 [2024-07-15 12:25:40.702031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.702057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.745 [2024-07-15 12:25:40.702116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ff32ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.702130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.745 #82 NEW cov: 12126 ft: 14299 corp: 24/2415b lim: 320 exec/s: 82 rss: 73Mb L: 180/180 MS: 1 InsertRepeatedBytes- 00:08:45.745 [2024-07-15 12:25:40.752080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.752106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.745 #83 NEW cov: 12126 ft: 14327 corp: 25/2522b lim: 320 exec/s: 83 rss: 73Mb L: 107/180 MS: 1 CrossOver- 00:08:45.745 [2024-07-15 12:25:40.792365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.792390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.745 [2024-07-15 12:25:40.792449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.792463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.745 [2024-07-15 12:25:40.792519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.792539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.745 #84 NEW cov: 12126 ft: 14525 corp: 26/2732b lim: 320 exec/s: 84 rss: 73Mb L: 210/210 MS: 1 CrossOver- 00:08:45.745 [2024-07-15 12:25:40.832352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.832378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.745 #85 NEW cov: 12143 ft: 14571 corp: 27/2811b lim: 320 exec/s: 85 rss: 73Mb L: 79/210 MS: 1 CrossOver- 00:08:45.745 [2024-07-15 12:25:40.872420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:45.745 [2024-07-15 12:25:40.872446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.005 #86 NEW cov: 12143 ft: 14583 corp: 28/2918b lim: 320 exec/s: 86 rss: 73Mb L: 107/210 MS: 1 ChangeByte- 00:08:46.005 [2024-07-15 12:25:40.922610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.005 [2024-07-15 12:25:40.922641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.005 #87 NEW cov: 12143 ft: 14630 corp: 29/3025b lim: 320 exec/s: 87 rss: 73Mb L: 107/210 MS: 1 ShuffleBytes- 00:08:46.005 [2024-07-15 12:25:40.972740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.005 [2024-07-15 12:25:40.972769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.005 #93 NEW cov: 12143 ft: 14639 corp: 30/3132b lim: 320 exec/s: 93 rss: 74Mb L: 107/210 MS: 1 ChangeBinInt- 00:08:46.005 [2024-07-15 12:25:41.012928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.005 [2024-07-15 12:25:41.012955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.005 #94 NEW cov: 12143 ft: 14710 corp: 
31/3239b lim: 320 exec/s: 94 rss: 74Mb L: 107/210 MS: 1 ChangeBit- 00:08:46.005 [2024-07-15 12:25:41.053100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:4 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:46.005 [2024-07-15 12:25:41.053128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.005 [2024-07-15 12:25:41.053187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NVME-MI RECEIVE (1e) qid:0 cid:5 nsid:1e1e1e1e cdw10:1e1e1e1e cdw11:1e1e1e1e SGL TRANSPORT DATA BLOCK TRANSPORT 0x1e1e1e1e1e1e1e1e 00:08:46.005 [2024-07-15 12:25:41.053201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.005 #95 NEW cov: 12143 ft: 14712 corp: 32/3380b lim: 320 exec/s: 95 rss: 74Mb L: 141/210 MS: 1 ShuffleBytes- 00:08:46.005 [2024-07-15 12:25:41.093088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.005 [2024-07-15 12:25:41.093115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.005 #96 NEW cov: 12143 ft: 14722 corp: 33/3487b lim: 320 exec/s: 96 rss: 74Mb L: 107/210 MS: 1 ChangeBinInt- 00:08:46.005 [2024-07-15 12:25:41.133206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2a) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.005 [2024-07-15 12:25:41.133232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.264 #97 NEW cov: 12143 ft: 14726 corp: 34/3586b lim: 320 exec/s: 97 rss: 74Mb L: 99/210 MS: 1 InsertByte- 00:08:46.264 [2024-07-15 12:25:41.173303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.264 [2024-07-15 12:25:41.173328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.264 #98 NEW cov: 12143 ft: 14733 corp: 35/3704b lim: 320 exec/s: 98 rss: 74Mb L: 118/210 MS: 1 InsertRepeatedBytes- 00:08:46.264 [2024-07-15 12:25:41.213622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:32ffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.264 [2024-07-15 12:25:41.213648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.264 [2024-07-15 12:25:41.213706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:eaeaeaea cdw10:eaeaeaea cdw11:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 00:08:46.264 [2024-07-15 12:25:41.213724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.264 [2024-07-15 12:25:41.213781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ea) qid:0 cid:6 nsid:eaeaeaea cdw10:eaeaeaea cdw11:eaeaeaea SGL TRANSPORT DATA BLOCK TRANSPORT 0xeaeaeaeaeaeaeaea 
00:08:46.264 [2024-07-15 12:25:41.213794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.264 #99 NEW cov: 12143 ft: 14748 corp: 36/3901b lim: 320 exec/s: 99 rss: 74Mb L: 197/210 MS: 1 InsertRepeatedBytes- 00:08:46.264 [2024-07-15 12:25:41.253517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.264 [2024-07-15 12:25:41.253548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.264 #100 NEW cov: 12143 ft: 14882 corp: 37/3988b lim: 320 exec/s: 100 rss: 74Mb L: 87/210 MS: 1 CopyPart- 00:08:46.264 [2024-07-15 12:25:41.303706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.264 [2024-07-15 12:25:41.303732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.264 #101 NEW cov: 12143 ft: 14929 corp: 38/4075b lim: 320 exec/s: 101 rss: 74Mb L: 87/210 MS: 1 CrossOver- 00:08:46.264 [2024-07-15 12:25:41.353827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffff01000000 00:08:46.264 [2024-07-15 12:25:41.353853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.264 #102 NEW cov: 12143 ft: 14939 corp: 39/4182b lim: 320 exec/s: 102 rss: 74Mb L: 107/210 MS: 1 ChangeBinInt- 00:08:46.524 [2024-07-15 12:25:41.393932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:0040ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:46.524 [2024-07-15 12:25:41.393958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.524 #103 NEW cov: 12143 ft: 15010 corp: 40/4291b lim: 320 exec/s: 51 rss: 74Mb L: 109/210 MS: 1 CMP- DE: "@\000"- 00:08:46.524 #103 DONE cov: 12143 ft: 15010 corp: 40/4291b lim: 320 exec/s: 51 rss: 74Mb 00:08:46.524 ###### Recommended dictionary. ###### 00:08:46.524 "@\000" # Uses: 0 00:08:46.524 ###### End of recommended dictionary. 
###### 00:08:46.524 Done 103 runs in 2 second(s) 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:46.524 12:25:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:46.524 [2024-07-15 12:25:41.599905] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
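[Editor note] After each run, the per-fuzzer config and suppression file are removed (nvmf/run.sh@54 above) and the driver loop in ../common.sh advances to the next fuzzer on the next port, so each of the 25 registered fuzz targets (fuzz_num, counted from the '.fn =' entries in llvm_nvme_fuzz.c at nvmf/run.sh@64) gets its own short run. A minimal sketch of that loop, assuming the start_llvm_fuzz helper traced above and simplifying the bookkeeping:

    # Assumed shape of the short-mode driver loop (start_llvm_fuzz_short in ../common.sh).
    fuzz_num=25          # from: grep -c '\.fn =' llvm_nvme_fuzz.c (see nvmf/run.sh@64 above)
    time_per_run=1       # seconds per fuzzer in short mode
    for ((i = 0; i < fuzz_num; i++)); do
        # Each call derives port 44NN, runs fuzzer i for $time_per_run seconds against its own
        # corpus dir, then cleans up /tmp/fuzz_json_${i}.conf and /var/tmp/suppress_nvmf_fuzz.
        start_llvm_fuzz "$i" "$time_per_run" 0x1
    done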
00:08:46.524 [2024-07-15 12:25:41.599966] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158594 ] 00:08:46.524 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.784 [2024-07-15 12:25:41.796972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.784 [2024-07-15 12:25:41.869042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.043 [2024-07-15 12:25:41.928510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.043 [2024-07-15 12:25:41.944718] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:47.043 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.043 INFO: Seed: 761014169 00:08:47.043 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:08:47.043 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:08:47.043 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:47.043 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.043 #2 INITED exec/s: 0 rss: 65Mb 00:08:47.043 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.043 This may also happen if the target rejected all inputs we tried so far 00:08:47.043 [2024-07-15 12:25:41.999859] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:47.043 [2024-07-15 12:25:42.000160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.043 [2024-07-15 12:25:42.000191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.043 [2024-07-15 12:25:42.000246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.043 [2024-07-15 12:25:42.000262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.370 NEW_FUNC[1/696]: 0x484780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:47.370 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:47.370 #4 NEW cov: 11963 ft: 11963 corp: 2/13b lim: 30 exec/s: 0 rss: 72Mb L: 12/12 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:47.370 [2024-07-15 12:25:42.340707] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:47.370 [2024-07-15 12:25:42.340940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.370 [2024-07-15 12:25:42.340983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.370 #15 NEW cov: 12093 ft: 13041 corp: 3/20b lim: 30 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 EraseBytes- 00:08:47.370 [2024-07-15 12:25:42.400729] ctrlr.c:2635:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x1 00:08:47.370 [2024-07-15 12:25:42.400939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.370 [2024-07-15 12:25:42.400968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.370 #21 NEW cov: 12105 ft: 13187 corp: 4/27b lim: 30 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 ChangeBinInt- 00:08:47.370 [2024-07-15 12:25:42.450887] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:47.370 [2024-07-15 12:25:42.451091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.370 [2024-07-15 12:25:42.451117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.370 #27 NEW cov: 12190 ft: 13475 corp: 5/34b lim: 30 exec/s: 0 rss: 72Mb L: 7/12 MS: 1 ChangeByte- 00:08:47.370 [2024-07-15 12:25:42.491286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.370 [2024-07-15 12:25:42.491314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.630 #28 NEW cov: 12190 ft: 13601 corp: 6/42b lim: 30 exec/s: 0 rss: 72Mb L: 8/12 MS: 1 CopyPart- 00:08:47.630 [2024-07-15 12:25:42.531262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.531288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.630 #29 NEW cov: 12190 ft: 13701 corp: 7/51b lim: 30 exec/s: 0 rss: 72Mb L: 9/12 MS: 1 CopyPart- 00:08:47.630 [2024-07-15 12:25:42.581306] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:47.630 [2024-07-15 12:25:42.581700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.581726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.581778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.581793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.581844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.581858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.630 #30 NEW cov: 12190 ft: 14086 corp: 8/71b lim: 30 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:47.630 [2024-07-15 12:25:42.621413] ctrlr.c:2635:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x300001717 00:08:47.630 [2024-07-15 12:25:42.621534] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.630 [2024-07-15 12:25:42.621635] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.630 [2024-07-15 12:25:42.621739] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000170a 00:08:47.630 [2024-07-15 12:25:42.621961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.621989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.622040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.622055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.622107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.622120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.622170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.622185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.630 #32 NEW cov: 12190 ft: 14602 corp: 9/96b lim: 30 exec/s: 0 rss: 72Mb L: 25/25 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:47.630 [2024-07-15 12:25:42.661535] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.630 [2024-07-15 12:25:42.661644] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.630 [2024-07-15 12:25:42.661746] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.630 [2024-07-15 12:25:42.661844] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000170a 00:08:47.630 [2024-07-15 12:25:42.662037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.662064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.662119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17ef83e8 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.662135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.662185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.662200] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.630 [2024-07-15 12:25:42.662249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.630 [2024-07-15 12:25:42.662263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.630 #33 NEW cov: 12190 ft: 14619 corp: 10/121b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:08:47.630 [2024-07-15 12:25:42.711621] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:47.630 [2024-07-15 12:25:42.711821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.631 [2024-07-15 12:25:42.711846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.631 #34 NEW cov: 12190 ft: 14672 corp: 11/132b lim: 30 exec/s: 0 rss: 73Mb L: 11/25 MS: 1 EraseBytes- 00:08:47.631 [2024-07-15 12:25:42.751669] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:47.631 [2024-07-15 12:25:42.751879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000080 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.631 [2024-07-15 12:25:42.751906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.890 #35 NEW cov: 12190 ft: 14707 corp: 12/143b lim: 30 exec/s: 0 rss: 73Mb L: 11/25 MS: 1 ChangeBit- 00:08:47.890 [2024-07-15 12:25:42.801866] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:08:47.890 [2024-07-15 12:25:42.802179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.802204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.890 [2024-07-15 12:25:42.802257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.802272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.890 #36 NEW cov: 12190 ft: 14724 corp: 13/156b lim: 30 exec/s: 0 rss: 73Mb L: 13/25 MS: 1 InsertByte- 00:08:47.890 [2024-07-15 12:25:42.841953] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (16128) > len (4) 00:08:47.890 [2024-07-15 12:25:42.842157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.842181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.890 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:47.890 #42 NEW cov: 12219 ft: 14775 corp: 14/165b lim: 30 exec/s: 0 rss: 73Mb L: 9/25 MS: 1 ChangeByte- 
00:08:47.890 [2024-07-15 12:25:42.892195] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.890 [2024-07-15 12:25:42.892308] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.890 [2024-07-15 12:25:42.892411] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.890 [2024-07-15 12:25:42.892514] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000170a 00:08:47.890 [2024-07-15 12:25:42.892724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.892750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.890 [2024-07-15 12:25:42.892803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.892818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.890 [2024-07-15 12:25:42.892866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:171f8317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.892880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.890 [2024-07-15 12:25:42.892929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.892944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.890 #43 NEW cov: 12219 ft: 14858 corp: 15/190b lim: 30 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:08:47.890 [2024-07-15 12:25:42.932197] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:08:47.890 [2024-07-15 12:25:42.932398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.932426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.890 #49 NEW cov: 12219 ft: 14900 corp: 16/197b lim: 30 exec/s: 0 rss: 73Mb L: 7/25 MS: 1 ChangeByte- 00:08:47.890 [2024-07-15 12:25:42.982426] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.890 [2024-07-15 12:25:42.982546] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1717 00:08:47.890 [2024-07-15 12:25:42.982648] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.890 [2024-07-15 12:25:42.982750] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:47.890 [2024-07-15 12:25:42.982945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.982970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:47.890 [2024-07-15 12:25:42.983022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:173200ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.983038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.890 [2024-07-15 12:25:42.983092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.983106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.890 [2024-07-15 12:25:42.983156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.890 [2024-07-15 12:25:42.983171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.149 #50 NEW cov: 12219 ft: 14919 corp: 17/223b lim: 30 exec/s: 50 rss: 73Mb L: 26/26 MS: 1 InsertByte- 00:08:48.149 [2024-07-15 12:25:43.032538] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:48.150 [2024-07-15 12:25:43.032837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.032863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.150 [2024-07-15 12:25:43.032916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.032931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.150 #51 NEW cov: 12219 ft: 14930 corp: 18/235b lim: 30 exec/s: 51 rss: 73Mb L: 12/26 MS: 1 ChangeBit- 00:08:48.150 [2024-07-15 12:25:43.072598] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfb 00:08:48.150 [2024-07-15 12:25:43.072804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.072829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.150 #52 NEW cov: 12219 ft: 14948 corp: 19/246b lim: 30 exec/s: 52 rss: 73Mb L: 11/26 MS: 1 ChangeBinInt- 00:08:48.150 [2024-07-15 12:25:43.112749] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (62508) > buf size (4096) 00:08:48.150 [2024-07-15 12:25:43.112860] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1044480) > buf size (4096) 00:08:48.150 [2024-07-15 12:25:43.113063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.113092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.150 [2024-07-15 12:25:43.113146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:fbff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.113161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.150 #53 NEW cov: 12219 ft: 14980 corp: 20/258b lim: 30 exec/s: 53 rss: 73Mb L: 12/26 MS: 1 InsertByte- 00:08:48.150 [2024-07-15 12:25:43.162916] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2d 00:08:48.150 [2024-07-15 12:25:43.163032] ctrlr.c:2678:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (8192) > len (4) 00:08:48.150 [2024-07-15 12:25:43.163235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.163259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.150 [2024-07-15 12:25:43.163313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.163329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.150 #54 NEW cov: 12219 ft: 15039 corp: 21/271b lim: 30 exec/s: 54 rss: 73Mb L: 13/26 MS: 1 InsertByte- 00:08:48.150 [2024-07-15 12:25:43.213004] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (62508) > buf size (4096) 00:08:48.150 [2024-07-15 12:25:43.213119] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1044216) > buf size (4096) 00:08:48.150 [2024-07-15 12:25:43.213318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.213344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.150 [2024-07-15 12:25:43.213396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:fbbd83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.213413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.150 #55 NEW cov: 12219 ft: 15046 corp: 22/284b lim: 30 exec/s: 55 rss: 73Mb L: 13/26 MS: 1 InsertByte- 00:08:48.150 [2024-07-15 12:25:43.263130] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:48.150 [2024-07-15 12:25:43.263429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.263454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.150 [2024-07-15 12:25:43.263507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.150 [2024-07-15 12:25:43.263522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 #56 NEW cov: 12219 ft: 15064 corp: 23/296b lim: 30 
exec/s: 56 rss: 73Mb L: 12/26 MS: 1 ShuffleBytes- 00:08:48.409 [2024-07-15 12:25:43.303346] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.303459] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1717 00:08:48.409 [2024-07-15 12:25:43.303568] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001f17 00:08:48.409 [2024-07-15 12:25:43.303669] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.303870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.303898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.303951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:173200ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.303966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.304018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.304032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.304084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.304099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.409 #57 NEW cov: 12219 ft: 15073 corp: 24/322b lim: 30 exec/s: 57 rss: 73Mb L: 26/26 MS: 1 ChangeBit- 00:08:48.409 [2024-07-15 12:25:43.353440] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.353564] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.353666] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.353862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.353888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.353942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.353958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.354009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.354023] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.409 #58 NEW cov: 12219 ft: 15114 corp: 25/343b lim: 30 exec/s: 58 rss: 73Mb L: 21/26 MS: 1 EraseBytes- 00:08:48.409 [2024-07-15 12:25:43.403807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0040000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.403834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.403886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.403900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 #62 NEW cov: 12219 ft: 15121 corp: 26/359b lim: 30 exec/s: 62 rss: 73Mb L: 16/26 MS: 4 EraseBytes-EraseBytes-ChangeByte-CrossOver- 00:08:48.409 [2024-07-15 12:25:43.443648] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534532) > buf size (4096) 00:08:48.409 [2024-07-15 12:25:43.443949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.443975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.444032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.444048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 #63 NEW cov: 12219 ft: 15163 corp: 27/372b lim: 30 exec/s: 63 rss: 73Mb L: 13/26 MS: 1 CrossOver- 00:08:48.409 [2024-07-15 12:25:43.493860] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.493972] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.494074] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.494175] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.409 [2024-07-15 12:25:43.494383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.494409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.494461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.494476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.494533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.494549] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.494601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.494615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.409 #64 NEW cov: 12219 ft: 15226 corp: 28/401b lim: 30 exec/s: 64 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:08:48.409 [2024-07-15 12:25:43.533979] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:48.409 [2024-07-15 12:25:43.534278] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:08:48.409 [2024-07-15 12:25:43.534486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.534512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.534571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.534587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.534640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.534655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.409 [2024-07-15 12:25:43.534709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.409 [2024-07-15 12:25:43.534724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.668 #65 NEW cov: 12219 ft: 15238 corp: 29/426b lim: 30 exec/s: 65 rss: 73Mb L: 25/29 MS: 1 InsertRepeatedBytes- 00:08:48.668 [2024-07-15 12:25:43.574010] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xfff9 00:08:48.668 [2024-07-15 12:25:43.574212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.574246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.668 #66 NEW cov: 12219 ft: 15247 corp: 30/433b lim: 30 exec/s: 66 rss: 73Mb L: 7/29 MS: 1 ChangeBinInt- 00:08:48.668 [2024-07-15 12:25:43.614184] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:48.668 [2024-07-15 12:25:43.614596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.614621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.614675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.614690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.614742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.614756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.668 #67 NEW cov: 12219 ft: 15258 corp: 31/453b lim: 30 exec/s: 67 rss: 74Mb L: 20/29 MS: 1 EraseBytes- 00:08:48.668 [2024-07-15 12:25:43.664295] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:48.668 [2024-07-15 12:25:43.664608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.664632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.664685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.664699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.668 #68 NEW cov: 12219 ft: 15260 corp: 32/465b lim: 30 exec/s: 68 rss: 74Mb L: 12/29 MS: 1 ShuffleBytes- 00:08:48.668 [2024-07-15 12:25:43.714475] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:08:48.668 [2024-07-15 12:25:43.714795] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:08:48.668 [2024-07-15 12:25:43.715003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.715029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.715083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.715098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.715151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.715166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.715221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.715236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.668 #69 NEW cov: 12219 ft: 15312 corp: 33/489b lim: 30 exec/s: 69 rss: 74Mb L: 24/29 MS: 1 InsertRepeatedBytes- 00:08:48.668 [2024-07-15 12:25:43.764636] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.668 [2024-07-15 12:25:43.764750] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (23756) > buf size (4096) 00:08:48.668 [2024-07-15 12:25:43.764945] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.668 [2024-07-15 12:25:43.765142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.765168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.765221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:173200ef cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.765236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.765287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.765301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.668 [2024-07-15 12:25:43.765352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.668 [2024-07-15 12:25:43.765366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.668 #70 NEW cov: 12219 ft: 15328 corp: 34/515b lim: 30 exec/s: 70 rss: 74Mb L: 26/29 MS: 1 CrossOver- 00:08:48.927 [2024-07-15 12:25:43.804666] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (41124) > buf size (4096) 00:08:48.927 [2024-07-15 12:25:43.804876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:28280028 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.804901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.927 #72 NEW cov: 12219 ft: 15391 corp: 35/525b lim: 30 exec/s: 72 rss: 74Mb L: 10/29 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:48.927 [2024-07-15 12:25:43.844833] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:48.927 [2024-07-15 12:25:43.844953] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (790796) > buf size (4096) 00:08:48.927 [2024-07-15 12:25:43.845160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3df683ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.845186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.927 [2024-07-15 12:25:43.845239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 
cdw10:04428309 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.845254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.927 #73 NEW cov: 12219 ft: 15421 corp: 36/538b lim: 30 exec/s: 73 rss: 74Mb L: 13/29 MS: 1 ChangeBinInt- 00:08:48.927 [2024-07-15 12:25:43.894957] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:48.927 [2024-07-15 12:25:43.895071] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (790796) > buf size (4096) 00:08:48.927 [2024-07-15 12:25:43.895282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3df683ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.895308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.927 [2024-07-15 12:25:43.895360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:04428309 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.895376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.927 #74 NEW cov: 12219 ft: 15440 corp: 37/550b lim: 30 exec/s: 74 rss: 74Mb L: 12/29 MS: 1 EraseBytes- 00:08:48.927 [2024-07-15 12:25:43.945103] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000004ff 00:08:48.927 [2024-07-15 12:25:43.945218] ctrlr.c:2647:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796672) > buf size (4096) 00:08:48.927 [2024-07-15 12:25:43.945411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3df68342 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.945435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.927 [2024-07-15 12:25:43.945486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:09ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.945501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.927 #75 NEW cov: 12219 ft: 15459 corp: 38/563b lim: 30 exec/s: 75 rss: 74Mb L: 13/29 MS: 1 ShuffleBytes- 00:08:48.927 [2024-07-15 12:25:43.985205] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200001717 00:08:48.927 [2024-07-15 12:25:43.985316] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.927 [2024-07-15 12:25:43.985418] ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300001717 00:08:48.927 [2024-07-15 12:25:43.985616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:17170217 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.985641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.927 [2024-07-15 12:25:43.985694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 
12:25:43.985710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.927 [2024-07-15 12:25:43.985760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:17178317 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.927 [2024-07-15 12:25:43.985774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.927 #76 NEW cov: 12219 ft: 15464 corp: 39/584b lim: 30 exec/s: 38 rss: 74Mb L: 21/29 MS: 1 ChangeBinInt- 00:08:48.927 #76 DONE cov: 12219 ft: 15464 corp: 39/584b lim: 30 exec/s: 38 rss: 74Mb 00:08:48.927 ###### Recommended dictionary. ###### 00:08:48.927 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:48.927 ###### End of recommended dictionary. ###### 00:08:48.927 Done 76 runs in 2 second(s) 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:49.186 12:25:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:49.186 [2024-07-15 12:25:44.198661] Starting SPDK v24.09-pre git sha1 dff473c1d 
/ DPDK 24.03.0 initialization... 00:08:49.186 [2024-07-15 12:25:44.198736] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158950 ] 00:08:49.186 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.445 [2024-07-15 12:25:44.399936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.445 [2024-07-15 12:25:44.472795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.445 [2024-07-15 12:25:44.532221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.445 [2024-07-15 12:25:44.548440] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:49.445 INFO: Running with entropic power schedule (0xFF, 100). 00:08:49.445 INFO: Seed: 3363013727 00:08:49.704 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:08:49.704 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:08:49.704 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:49.704 INFO: A corpus is not provided, starting from an empty corpus 00:08:49.704 #2 INITED exec/s: 0 rss: 64Mb 00:08:49.704 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:49.704 This may also happen if the target rejected all inputs we tried so far 00:08:49.704 [2024-07-15 12:25:44.593820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.704 [2024-07-15 12:25:44.593851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.704 [2024-07-15 12:25:44.593907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.704 [2024-07-15 12:25:44.593922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.963 NEW_FUNC[1/695]: 0x487230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:49.963 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:49.963 #13 NEW cov: 11885 ft: 11885 corp: 2/17b lim: 35 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:49.963 [2024-07-15 12:25:44.934538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:72000041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:44.934578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.963 #17 NEW cov: 12015 ft: 12823 corp: 3/27b lim: 35 exec/s: 0 rss: 72Mb L: 10/16 MS: 4 ChangeByte-InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes- 00:08:49.963 [2024-07-15 12:25:44.974576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:e10089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:44.974605] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.963 #18 NEW cov: 12021 ft: 13010 corp: 4/37b lim: 35 exec/s: 0 rss: 72Mb L: 10/16 MS: 1 CMP- DE: "\001'\211\341\341\3324\370"- 00:08:49.963 [2024-07-15 12:25:45.024720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:45.024748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.963 #19 NEW cov: 12106 ft: 13221 corp: 5/48b lim: 35 exec/s: 0 rss: 72Mb L: 11/16 MS: 1 EraseBytes- 00:08:49.963 [2024-07-15 12:25:45.075255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:45.075282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.963 [2024-07-15 12:25:45.075336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:45.075350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.963 [2024-07-15 12:25:45.075401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:45.075415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.963 [2024-07-15 12:25:45.075467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:49.963 [2024-07-15 12:25:45.075480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.222 #20 NEW cov: 12106 ft: 13870 corp: 6/80b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:50.222 [2024-07-15 12:25:45.124988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:72000041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.125015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.222 #21 NEW cov: 12106 ft: 13995 corp: 7/89b lim: 35 exec/s: 0 rss: 72Mb L: 9/32 MS: 1 EraseBytes- 00:08:50.222 [2024-07-15 12:25:45.165504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.165537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.165592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.165610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 
12:25:45.165661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.165675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.165728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:41010013 cdw11:e1002789 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.165742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.222 #22 NEW cov: 12106 ft: 14073 corp: 8/119b lim: 35 exec/s: 0 rss: 72Mb L: 30/32 MS: 1 CrossOver- 00:08:50.222 [2024-07-15 12:25:45.215367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.215392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.215444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:89e10027 cdw11:3400e1da SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.215457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.222 #28 NEW cov: 12106 ft: 14112 corp: 9/135b lim: 35 exec/s: 0 rss: 72Mb L: 16/32 MS: 1 PersAutoDict- DE: "\001'\211\341\341\3324\370"- 00:08:50.222 [2024-07-15 12:25:45.255724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.255750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.255804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:011f0013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.255818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.255870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.255885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.255938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:41010013 cdw11:e1002789 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.255951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.222 #29 NEW cov: 12106 ft: 14223 corp: 10/165b lim: 35 exec/s: 0 rss: 72Mb L: 30/32 MS: 1 CMP- DE: "\001\037"- 00:08:50.222 [2024-07-15 12:25:45.305797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.305823] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.305875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.305890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.305943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.305960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.222 #30 NEW cov: 12106 ft: 14465 corp: 11/190b lim: 35 exec/s: 0 rss: 73Mb L: 25/32 MS: 1 EraseBytes- 00:08:50.222 [2024-07-15 12:25:45.345999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:13001fe1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.346025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.346080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:011f0013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.346095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.346146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.346162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.222 [2024-07-15 12:25:45.346215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:41010013 cdw11:e1002789 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.222 [2024-07-15 12:25:45.346229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.480 #31 NEW cov: 12106 ft: 14489 corp: 12/220b lim: 35 exec/s: 0 rss: 73Mb L: 30/32 MS: 1 ChangeByte- 00:08:50.480 [2024-07-15 12:25:45.395975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.396002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.480 [2024-07-15 12:25:45.396055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.396070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.480 [2024-07-15 12:25:45.396120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001359 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.396134] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.480 #32 NEW cov: 12106 ft: 14520 corp: 13/245b lim: 35 exec/s: 0 rss: 73Mb L: 25/32 MS: 1 ChangeByte- 00:08:50.480 [2024-07-15 12:25:45.446036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.446062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.480 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:50.480 #33 NEW cov: 12129 ft: 14916 corp: 14/264b lim: 35 exec/s: 0 rss: 73Mb L: 19/32 MS: 1 PersAutoDict- DE: "\001'\211\341\341\3324\370"- 00:08:50.480 [2024-07-15 12:25:45.496358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.496384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.480 [2024-07-15 12:25:45.496438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.496452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.480 [2024-07-15 12:25:45.496508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.480 [2024-07-15 12:25:45.496523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.480 [2024-07-15 12:25:45.496577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13130013 cdw11:25001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.481 [2024-07-15 12:25:45.496591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.481 #34 NEW cov: 12129 ft: 14957 corp: 15/297b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertByte- 00:08:50.481 [2024-07-15 12:25:45.536449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.481 [2024-07-15 12:25:45.536474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.481 [2024-07-15 12:25:45.536532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.481 [2024-07-15 12:25:45.536547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.481 [2024-07-15 12:25:45.536609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:ec00edec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.481 [2024-07-15 12:25:45.536622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.481 
[2024-07-15 12:25:45.536675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ecec00ec cdw11:2500f513 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.481 [2024-07-15 12:25:45.536688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.481 #35 NEW cov: 12129 ft: 14983 corp: 16/330b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:08:50.481 [2024-07-15 12:25:45.586246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000041 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.481 [2024-07-15 12:25:45.586271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.481 #36 NEW cov: 12129 ft: 15004 corp: 17/338b lim: 35 exec/s: 36 rss: 73Mb L: 8/33 MS: 1 EraseBytes- 00:08:50.739 [2024-07-15 12:25:45.626621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.626647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.626701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.626715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.626769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001359 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.626783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.739 #37 NEW cov: 12129 ft: 15012 corp: 18/363b lim: 35 exec/s: 37 rss: 73Mb L: 25/33 MS: 1 ShuffleBytes- 00:08:50.739 [2024-07-15 12:25:45.676784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01240041 cdw11:e1002789 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.676810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.676866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.676881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.676934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.676949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.739 #38 NEW cov: 12129 ft: 15028 corp: 19/389b lim: 35 exec/s: 38 rss: 73Mb L: 26/33 MS: 1 InsertByte- 00:08:50.739 [2024-07-15 12:25:45.716977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.717002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.717056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.717070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.717122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:27890013 cdw11:1300e113 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.717137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.717191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.717204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.739 #39 NEW cov: 12129 ft: 15040 corp: 20/421b lim: 35 exec/s: 39 rss: 73Mb L: 32/33 MS: 1 CopyPart- 00:08:50.739 [2024-07-15 12:25:45.756965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01240041 cdw11:e1002789 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.756990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.757042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.757055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.757105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.757135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.739 #40 NEW cov: 12129 ft: 15060 corp: 21/447b lim: 35 exec/s: 40 rss: 73Mb L: 26/33 MS: 1 ChangeASCIIInt- 00:08:50.739 [2024-07-15 12:25:45.807112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.807137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.807190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.807205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.807260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:50.739 [2024-07-15 12:25:45.807275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.739 #41 NEW cov: 12129 ft: 15061 corp: 22/472b lim: 35 exec/s: 41 rss: 73Mb L: 25/33 MS: 1 ChangeBit- 00:08:50.739 [2024-07-15 12:25:45.846965] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:50.739 [2024-07-15 12:25:45.847360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:00008900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.847386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.847441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e1130000 cdw11:01001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.847458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.847511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.847525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.739 [2024-07-15 12:25:45.847583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13130013 cdw11:41001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.739 [2024-07-15 12:25:45.847597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.997 #42 NEW cov: 12138 ft: 15119 corp: 23/506b lim: 35 exec/s: 42 rss: 73Mb L: 34/34 MS: 1 CMP- DE: "\000\000\000\000"- 00:08:50.997 [2024-07-15 12:25:45.887369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.997 [2024-07-15 12:25:45.887395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.997 [2024-07-15 12:25:45.887449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.997 [2024-07-15 12:25:45.887463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:45.887516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:45.887535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.998 #43 NEW cov: 12138 ft: 15136 corp: 24/531b lim: 35 exec/s: 43 rss: 73Mb L: 25/34 MS: 1 EraseBytes- 00:08:50.998 [2024-07-15 12:25:45.927333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff0029ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:45.927360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.998 #44 NEW cov: 12138 ft: 15157 corp: 25/550b lim: 35 exec/s: 44 rss: 73Mb L: 19/34 MS: 1 ChangeByte- 00:08:50.998 [2024-07-15 12:25:45.977599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:45.977625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:45.977680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:45.977697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:45.977752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:45.977767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.998 #45 NEW cov: 12138 ft: 15166 corp: 26/575b lim: 35 exec/s: 45 rss: 73Mb L: 25/34 MS: 1 ChangeASCIIInt- 00:08:50.998 [2024-07-15 12:25:46.027760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.027786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:46.027840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.027854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:46.027906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001359 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.027936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.998 #46 NEW cov: 12138 ft: 15167 corp: 27/600b lim: 35 exec/s: 46 rss: 73Mb L: 25/34 MS: 1 ChangeBit- 00:08:50.998 [2024-07-15 12:25:46.067900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.067926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:46.067982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.067996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:46.068048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001359 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 
12:25:46.068063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.998 #47 NEW cov: 12138 ft: 15172 corp: 28/625b lim: 35 exec/s: 47 rss: 73Mb L: 25/34 MS: 1 ChangeASCIIInt- 00:08:50.998 [2024-07-15 12:25:46.117805] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:50.998 [2024-07-15 12:25:46.118022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff0029ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.118049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.998 [2024-07-15 12:25:46.118163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:50.998 [2024-07-15 12:25:46.118182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.257 #48 NEW cov: 12138 ft: 15192 corp: 29/650b lim: 35 exec/s: 48 rss: 74Mb L: 25/34 MS: 1 InsertRepeatedBytes- 00:08:51.257 [2024-07-15 12:25:46.168378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01130041 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.168405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.168464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:27001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.168479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.168533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:131b00e1 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.168548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.168600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.168613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.168666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:13e10013 cdw11:f800da34 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.168681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:51.257 #54 NEW cov: 12138 ft: 15255 corp: 30/685b lim: 35 exec/s: 54 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:08:51.257 [2024-07-15 12:25:46.208376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.208403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.208459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:130013b6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.208474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.208540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.208555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.208610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13410013 cdw11:89000127 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.208623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.257 #55 NEW cov: 12138 ft: 15262 corp: 31/716b lim: 35 exec/s: 55 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:08:51.257 [2024-07-15 12:25:46.248721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.248747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.248803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:011f0013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.248818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.248870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.248885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.248938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:41010013 cdw11:e1002789 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.248955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.257 #56 NEW cov: 12138 ft: 15326 corp: 32/746b lim: 35 exec/s: 56 rss: 74Mb L: 30/35 MS: 1 ShuffleBytes- 00:08:51.257 [2024-07-15 12:25:46.288227] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:51.257 [2024-07-15 12:25:46.288634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:00008900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.288659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.257 [2024-07-15 12:25:46.288717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e1130000 cdw11:01001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.257 [2024-07-15 12:25:46.288734] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.258 [2024-07-15 12:25:46.288787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.258 [2024-07-15 12:25:46.288802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.258 [2024-07-15 12:25:46.288854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13130013 cdw11:89001327 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.258 [2024-07-15 12:25:46.288869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.258 #57 NEW cov: 12138 ft: 15394 corp: 33/777b lim: 35 exec/s: 57 rss: 74Mb L: 31/35 MS: 1 EraseBytes- 00:08:51.258 [2024-07-15 12:25:46.338474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff0010ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.258 [2024-07-15 12:25:46.338500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.258 [2024-07-15 12:25:46.338557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:89e10027 cdw11:3400e1da SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.258 [2024-07-15 12:25:46.338573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.258 #58 NEW cov: 12138 ft: 15402 corp: 34/793b lim: 35 exec/s: 58 rss: 74Mb L: 16/35 MS: 1 ChangeByte- 00:08:51.517 [2024-07-15 12:25:46.388765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.388793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.388849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.388864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.388918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:131300e8 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.388934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.517 #59 NEW cov: 12138 ft: 15420 corp: 35/819b lim: 35 exec/s: 59 rss: 74Mb L: 26/35 MS: 1 InsertByte- 00:08:51.517 [2024-07-15 12:25:46.438890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:b1b100b1 cdw11:b100b1b1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.438922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.438976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:b1b100b1 cdw11:b100b1b1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.438991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.439045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:b1b100b1 cdw11:b100b1b1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.439059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.517 #61 NEW cov: 12138 ft: 15427 corp: 36/846b lim: 35 exec/s: 61 rss: 74Mb L: 27/35 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:51.517 [2024-07-15 12:25:46.478896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff01000a cdw11:ff001fff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.478921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.478975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:89e10027 cdw11:3400e1da SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.478989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.517 #62 NEW cov: 12138 ft: 15455 corp: 37/862b lim: 35 exec/s: 62 rss: 74Mb L: 16/35 MS: 1 PersAutoDict- DE: "\001\037"- 00:08:51.517 [2024-07-15 12:25:46.519241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:130089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.519267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.519321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:011f0013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.519334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.519386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.519400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.519451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:13410013 cdw11:89000127 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.519464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.517 #63 NEW cov: 12138 ft: 15513 corp: 38/893b lim: 35 exec/s: 63 rss: 74Mb L: 31/35 MS: 1 CopyPart- 00:08:51.517 [2024-07-15 12:25:46.559212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:01270041 cdw11:1b0089e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.559238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.517 
[2024-07-15 12:25:46.559288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:13130013 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.559301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.517 [2024-07-15 12:25:46.559367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:131300e8 cdw11:13001313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.517 [2024-07-15 12:25:46.559385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.517 #64 pulse cov: 12138 ft: 15529 corp: 38/893b lim: 35 exec/s: 32 rss: 74Mb 00:08:51.517 #64 NEW cov: 12138 ft: 15529 corp: 39/919b lim: 35 exec/s: 32 rss: 74Mb L: 26/35 MS: 1 ChangeBinInt- 00:08:51.517 #64 DONE cov: 12138 ft: 15529 corp: 39/919b lim: 35 exec/s: 32 rss: 74Mb 00:08:51.517 ###### Recommended dictionary. ###### 00:08:51.517 "\001'\211\341\341\3324\370" # Uses: 2 00:08:51.517 "\001\037" # Uses: 1 00:08:51.517 "\000\000\000\000" # Uses: 0 00:08:51.517 ###### End of recommended dictionary. ###### 00:08:51.517 Done 64 runs in 2 second(s) 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:51.776 12:25:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 
0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:51.776 [2024-07-15 12:25:46.774659] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:51.776 [2024-07-15 12:25:46.774740] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159307 ] 00:08:51.776 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.035 [2024-07-15 12:25:46.977680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.035 [2024-07-15 12:25:47.050831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.035 [2024-07-15 12:25:47.110293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.035 [2024-07-15 12:25:47.126504] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:52.035 INFO: Running with entropic power schedule (0xFF, 100). 00:08:52.035 INFO: Seed: 1647050172 00:08:52.035 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:08:52.035 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:08:52.035 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:52.035 INFO: A corpus is not provided, starting from an empty corpus 00:08:52.035 #2 INITED exec/s: 0 rss: 65Mb 00:08:52.035 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:52.035 This may also happen if the target rejected all inputs we tried so far 00:08:52.553 NEW_FUNC[1/684]: 0x488f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:52.553 NEW_FUNC[2/684]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:52.553 #3 NEW cov: 11808 ft: 11808 corp: 2/17b lim: 20 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:52.553 #5 NEW cov: 11938 ft: 12763 corp: 3/22b lim: 20 exec/s: 0 rss: 72Mb L: 5/16 MS: 2 ChangeBit-CMP- DE: "\377\377\377\377"- 00:08:52.553 #6 NEW cov: 11944 ft: 13018 corp: 4/38b lim: 20 exec/s: 0 rss: 72Mb L: 16/16 MS: 1 ChangeBinInt- 00:08:52.553 #7 NEW cov: 12029 ft: 13315 corp: 5/55b lim: 20 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 InsertByte- 00:08:52.553 #8 NEW cov: 12029 ft: 13410 corp: 6/74b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:08:52.811 #9 NEW cov: 12034 ft: 13766 corp: 7/84b lim: 20 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 EraseBytes- 00:08:52.811 #11 NEW cov: 12034 ft: 13909 corp: 8/89b lim: 20 exec/s: 0 rss: 72Mb L: 5/19 MS: 2 CrossOver-CrossOver- 00:08:52.811 #12 NEW cov: 12034 ft: 13931 corp: 9/108b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:08:52.811 #13 NEW cov: 12034 ft: 13987 corp: 10/113b lim: 20 exec/s: 0 rss: 72Mb L: 5/19 MS: 1 ShuffleBytes- 00:08:52.811 #14 NEW cov: 12034 ft: 14030 corp: 11/132b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeBit- 00:08:52.811 #15 NEW cov: 12034 ft: 14044 corp: 12/151b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeBinInt- 00:08:53.070 #16 NEW cov: 12034 ft: 14052 corp: 13/170b lim: 20 exec/s: 0 rss: 72Mb L: 19/19 MS: 1 ChangeBit- 00:08:53.070 #17 NEW cov: 12034 ft: 14057 corp: 14/189b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:08:53.070 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:53.070 #18 NEW cov: 12057 ft: 14146 corp: 15/208b lim: 20 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CopyPart- 00:08:53.070 [2024-07-15 12:25:48.094439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.070 [2024-07-15 12:25:48.094483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.070 NEW_FUNC[1/17]: 0x11db1b0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3359 00:08:53.070 NEW_FUNC[2/17]: 0x11dbd30 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3301 00:08:53.070 #21 NEW cov: 12300 ft: 14421 corp: 16/217b lim: 20 exec/s: 0 rss: 73Mb L: 9/19 MS: 3 CrossOver-ShuffleBytes-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:53.070 #22 NEW cov: 12300 ft: 14454 corp: 17/223b lim: 20 exec/s: 22 rss: 73Mb L: 6/19 MS: 1 InsertByte- 00:08:53.328 #23 NEW cov: 12300 ft: 14467 corp: 18/241b lim: 20 exec/s: 23 rss: 73Mb L: 18/19 MS: 1 EraseBytes- 00:08:53.328 #24 NEW cov: 12300 ft: 14499 corp: 19/260b lim: 20 exec/s: 24 rss: 73Mb L: 19/19 MS: 1 ChangeByte- 00:08:53.328 #25 NEW cov: 12300 ft: 14563 corp: 20/280b lim: 20 exec/s: 25 rss: 73Mb L: 20/20 MS: 1 InsertByte- 00:08:53.328 #31 NEW cov: 12300 ft: 14565 corp: 21/299b lim: 20 exec/s: 31 rss: 73Mb L: 19/20 MS: 1 ShuffleBytes- 00:08:53.328 #32 NEW cov: 12300 ft: 14604 
corp: 22/315b lim: 20 exec/s: 32 rss: 73Mb L: 16/20 MS: 1 CrossOver- 00:08:53.328 NEW_FUNC[1/2]: 0x1341c10 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:08:53.328 NEW_FUNC[2/2]: 0x1363840 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3531 00:08:53.328 #33 NEW cov: 12355 ft: 14721 corp: 23/335b lim: 20 exec/s: 33 rss: 73Mb L: 20/20 MS: 1 CrossOver- 00:08:53.587 #34 NEW cov: 12359 ft: 14834 corp: 24/347b lim: 20 exec/s: 34 rss: 73Mb L: 12/20 MS: 1 EraseBytes- 00:08:53.587 #35 NEW cov: 12359 ft: 14858 corp: 25/366b lim: 20 exec/s: 35 rss: 73Mb L: 19/20 MS: 1 ChangeByte- 00:08:53.587 #36 NEW cov: 12359 ft: 14909 corp: 26/381b lim: 20 exec/s: 36 rss: 73Mb L: 15/20 MS: 1 EraseBytes- 00:08:53.587 #37 NEW cov: 12359 ft: 14950 corp: 27/400b lim: 20 exec/s: 37 rss: 73Mb L: 19/20 MS: 1 ChangeBit- 00:08:53.587 #38 NEW cov: 12359 ft: 15024 corp: 28/420b lim: 20 exec/s: 38 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:08:53.587 [2024-07-15 12:25:48.666376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:53.587 [2024-07-15 12:25:48.666417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.587 #39 NEW cov: 12359 ft: 15078 corp: 29/439b lim: 20 exec/s: 39 rss: 73Mb L: 19/20 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:53.846 #40 NEW cov: 12359 ft: 15082 corp: 30/459b lim: 20 exec/s: 40 rss: 73Mb L: 20/20 MS: 1 InsertByte- 00:08:53.846 #41 NEW cov: 12359 ft: 15174 corp: 31/473b lim: 20 exec/s: 41 rss: 73Mb L: 14/20 MS: 1 EraseBytes- 00:08:53.846 #42 NEW cov: 12359 ft: 15195 corp: 32/485b lim: 20 exec/s: 42 rss: 73Mb L: 12/20 MS: 1 ChangeBit- 00:08:53.846 #43 NEW cov: 12359 ft: 15200 corp: 33/498b lim: 20 exec/s: 43 rss: 73Mb L: 13/20 MS: 1 EraseBytes- 00:08:53.846 #44 NEW cov: 12359 ft: 15201 corp: 34/508b lim: 20 exec/s: 44 rss: 73Mb L: 10/20 MS: 1 InsertRepeatedBytes- 00:08:53.846 #45 NEW cov: 12359 ft: 15212 corp: 35/527b lim: 20 exec/s: 45 rss: 73Mb L: 19/20 MS: 1 ChangeBit- 00:08:54.105 #46 NEW cov: 12359 ft: 15216 corp: 36/535b lim: 20 exec/s: 46 rss: 73Mb L: 8/20 MS: 1 CrossOver- 00:08:54.105 #47 NEW cov: 12359 ft: 15220 corp: 37/550b lim: 20 exec/s: 47 rss: 73Mb L: 15/20 MS: 1 CrossOver- 00:08:54.105 #48 NEW cov: 12359 ft: 15239 corp: 38/570b lim: 20 exec/s: 48 rss: 73Mb L: 20/20 MS: 1 ChangeByte- 00:08:54.105 #50 NEW cov: 12359 ft: 15252 corp: 39/581b lim: 20 exec/s: 50 rss: 74Mb L: 11/20 MS: 2 CrossOver-CMP- DE: "\377\377\377\377\377\377\002\375"- 00:08:54.105 #51 NEW cov: 12359 ft: 15262 corp: 40/599b lim: 20 exec/s: 25 rss: 74Mb L: 18/20 MS: 1 InsertRepeatedBytes- 00:08:54.106 #51 DONE cov: 12359 ft: 15262 corp: 40/599b lim: 20 exec/s: 25 rss: 74Mb 00:08:54.106 ###### Recommended dictionary. ###### 00:08:54.106 "\377\377\377\377" # Uses: 1 00:08:54.106 "\001\000\000\000\000\000\000\000" # Uses: 1 00:08:54.106 "\377\377\377\377\377\377\002\375" # Uses: 0 00:08:54.106 ###### End of recommended dictionary. 
###### 00:08:54.106 Done 51 runs in 2 second(s) 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:54.365 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:54.366 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:54.366 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:54.366 12:25:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:54.366 [2024-07-15 12:25:49.363147] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:08:54.366 [2024-07-15 12:25:49.363220] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159622 ] 00:08:54.366 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.625 [2024-07-15 12:25:49.563189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.625 [2024-07-15 12:25:49.635290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.625 [2024-07-15 12:25:49.695505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.625 [2024-07-15 12:25:49.711717] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:54.625 INFO: Running with entropic power schedule (0xFF, 100). 00:08:54.625 INFO: Seed: 4232058584 00:08:54.625 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:08:54.625 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:08:54.625 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:54.625 INFO: A corpus is not provided, starting from an empty corpus 00:08:54.625 #2 INITED exec/s: 0 rss: 65Mb 00:08:54.625 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:54.625 This may also happen if the target rejected all inputs we tried so far 00:08:54.883 [2024-07-15 12:25:49.767479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.883 [2024-07-15 12:25:49.767509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.883 [2024-07-15 12:25:49.767566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.883 [2024-07-15 12:25:49.767582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.883 [2024-07-15 12:25:49.767637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.883 [2024-07-15 12:25:49.767650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.142 NEW_FUNC[1/696]: 0x489ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:55.142 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:55.142 #16 NEW cov: 11902 ft: 11902 corp: 2/23b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 4 CrossOver-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:08:55.142 [2024-07-15 12:25:50.118337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.118382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.118463] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.118477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.118536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.118549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.142 #17 NEW cov: 12036 ft: 12509 corp: 3/45b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 ChangeByte- 00:08:55.142 [2024-07-15 12:25:50.168372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.168397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.168468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:feff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.168482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.168542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.168569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.142 #18 NEW cov: 12042 ft: 12789 corp: 4/67b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 ChangeBinInt- 00:08:55.142 [2024-07-15 12:25:50.208460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.208488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.208543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.208557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.208609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.208623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.142 #24 NEW cov: 12127 ft: 13030 corp: 5/89b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 CMP- DE: "\002\000"- 00:08:55.142 [2024-07-15 12:25:50.258841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.258865] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.258937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.258951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.259002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.259016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.142 [2024-07-15 12:25:50.259068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.142 [2024-07-15 12:25:50.259082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.401 #27 NEW cov: 12127 ft: 13433 corp: 6/122b lim: 35 exec/s: 0 rss: 72Mb L: 33/33 MS: 3 CMP-ChangeBit-InsertRepeatedBytes- DE: "\377\001\000\000"- 00:08:55.401 [2024-07-15 12:25:50.298404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2fd60ae4 cdw11:b9e40001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.298429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.401 #28 NEW cov: 12127 ft: 14451 corp: 7/131b lim: 35 exec/s: 0 rss: 72Mb L: 9/33 MS: 1 CMP- DE: "\344/\326\271\344\211'\000"- 00:08:55.401 [2024-07-15 12:25:50.339149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.339174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.339245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.339259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.339312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.339326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.339379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.339392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.339445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 
[2024-07-15 12:25:50.339459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:55.401 #29 NEW cov: 12127 ft: 14553 corp: 8/166b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:08:55.401 [2024-07-15 12:25:50.389023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02330002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.389047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.389102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008d02 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.389117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.389185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.389198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.401 #30 NEW cov: 12127 ft: 14604 corp: 9/188b lim: 35 exec/s: 0 rss: 72Mb L: 22/35 MS: 1 CMP- DE: "\000\000\000\000\0023y\215"- 00:08:55.401 [2024-07-15 12:25:50.429257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.429282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.429353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.429370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.429423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.429438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.429491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.429505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.401 #31 NEW cov: 12127 ft: 14701 corp: 10/222b lim: 35 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 EraseBytes- 00:08:55.401 [2024-07-15 12:25:50.479282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.479307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.479364] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.479378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.479430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.479443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.401 #32 NEW cov: 12127 ft: 14755 corp: 11/244b lim: 35 exec/s: 0 rss: 72Mb L: 22/35 MS: 1 ShuffleBytes- 00:08:55.401 [2024-07-15 12:25:50.519305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.519331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.519403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.519417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.401 [2024-07-15 12:25:50.519469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000fffb cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.401 [2024-07-15 12:25:50.519483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.660 #33 NEW cov: 12127 ft: 14794 corp: 12/266b lim: 35 exec/s: 0 rss: 72Mb L: 22/35 MS: 1 ChangeBinInt- 00:08:55.660 [2024-07-15 12:25:50.559612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.559638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.660 [2024-07-15 12:25:50.559695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.559708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.660 [2024-07-15 12:25:50.559762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.559778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.660 [2024-07-15 12:25:50.559829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.559843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.660 #34 NEW cov: 12127 ft: 14819 corp: 
13/300b lim: 35 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 ShuffleBytes- 00:08:55.660 [2024-07-15 12:25:50.609755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.609780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.660 [2024-07-15 12:25:50.609837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.609851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.660 [2024-07-15 12:25:50.609904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.609917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.660 [2024-07-15 12:25:50.609969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.609983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.660 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:55.660 #35 NEW cov: 12150 ft: 14878 corp: 14/334b lim: 35 exec/s: 0 rss: 72Mb L: 34/35 MS: 1 PersAutoDict- DE: "\002\000"- 00:08:55.660 [2024-07-15 12:25:50.659735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.660 [2024-07-15 12:25:50.659761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.659814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.659827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.659899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.659913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.661 #36 NEW cov: 12150 ft: 14891 corp: 15/356b lim: 35 exec/s: 0 rss: 72Mb L: 22/35 MS: 1 ChangeBinInt- 00:08:55.661 [2024-07-15 12:25:50.700001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.700026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.700082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2fd602e4 
cdw11:b9e40001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.700096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.700149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00002700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.700163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.700214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.700227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.661 #37 NEW cov: 12150 ft: 14956 corp: 16/390b lim: 35 exec/s: 0 rss: 73Mb L: 34/35 MS: 1 PersAutoDict- DE: "\344/\326\271\344\211'\000"- 00:08:55.661 [2024-07-15 12:25:50.750005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02330002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.750030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.750086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008d02 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.750099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.661 [2024-07-15 12:25:50.750154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.661 [2024-07-15 12:25:50.750167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.661 #38 NEW cov: 12150 ft: 14969 corp: 17/415b lim: 35 exec/s: 38 rss: 73Mb L: 25/35 MS: 1 CrossOver- 00:08:55.920 [2024-07-15 12:25:50.800468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.800494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.800556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.800570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.800624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.800638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.800690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:7 nsid:0 cdw10:02000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.800703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.800756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.800769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:55.920 #39 NEW cov: 12150 ft: 15008 corp: 18/450b lim: 35 exec/s: 39 rss: 73Mb L: 35/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:55.920 [2024-07-15 12:25:50.840533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.840558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.840616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.840630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.840682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.840695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.840748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00800002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.840762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.920 #40 NEW cov: 12150 ft: 15062 corp: 19/484b lim: 35 exec/s: 40 rss: 73Mb L: 34/35 MS: 1 ChangeBit- 00:08:55.920 [2024-07-15 12:25:50.880722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.880746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.880800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.880814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.880869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.880882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.880935] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.880948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.920 #41 NEW cov: 12150 ft: 15072 corp: 20/518b lim: 35 exec/s: 41 rss: 73Mb L: 34/35 MS: 1 ShuffleBytes- 00:08:55.920 [2024-07-15 12:25:50.920997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.921021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.921094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.921109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.921165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.921179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.921235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.921249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:50.921303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.921319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:55.920 #42 NEW cov: 12150 ft: 15112 corp: 21/553b lim: 35 exec/s: 42 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:08:55.920 [2024-07-15 12:25:50.970486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:50.970510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.920 #47 NEW cov: 12150 ft: 15185 corp: 22/563b lim: 35 exec/s: 47 rss: 73Mb L: 10/35 MS: 5 CrossOver-ChangeBit-EraseBytes-EraseBytes-InsertRepeatedBytes- 00:08:55.920 [2024-07-15 12:25:51.020908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:08000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:51.020934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:51.020991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:51.021005] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.920 [2024-07-15 12:25:51.021060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.920 [2024-07-15 12:25:51.021074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.180 #48 NEW cov: 12150 ft: 15199 corp: 23/585b lim: 35 exec/s: 48 rss: 73Mb L: 22/35 MS: 1 ChangeBit- 00:08:56.180 [2024-07-15 12:25:51.071249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.071274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.180 [2024-07-15 12:25:51.071347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.071361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.180 [2024-07-15 12:25:51.071414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.071428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.180 [2024-07-15 12:25:51.071479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.071493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.180 #49 NEW cov: 12150 ft: 15214 corp: 24/618b lim: 35 exec/s: 49 rss: 73Mb L: 33/35 MS: 1 ShuffleBytes- 00:08:56.180 [2024-07-15 12:25:51.120897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2f000ae4 cdw11:09e40001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.120922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.180 #50 NEW cov: 12150 ft: 15232 corp: 25/627b lim: 35 exec/s: 50 rss: 73Mb L: 9/35 MS: 1 ChangeBinInt- 00:08:56.180 [2024-07-15 12:25:51.171016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.171043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.180 #51 NEW cov: 12150 ft: 15248 corp: 26/637b lim: 35 exec/s: 51 rss: 73Mb L: 10/35 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:08:56.180 [2024-07-15 12:25:51.221642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.221666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
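
The CREATE IO CQ prints above carry the raw cdw10/cdw11 dwords of each fuzzed admin command, and every completion is INVALID OPCODE (00/01), i.e. status code type 0 (generic) with status code 0x01: Create I/O CQ/SQ are PCIe-transport admin commands, so an NVMe-oF target rejects them. A minimal C sketch of decoding one logged dword pair using the field layout from the NVMe base specification (illustrative only, not SPDK source) follows:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode CREATE IO CQ (opcode 0x05) dwords per the NVMe base spec:
     * CDW10 carries QID[15:0] and QSIZE[31:16]; CDW11 carries PC, IEN and IV. */
    static void decode_create_io_cq(uint32_t cdw10, uint32_t cdw11)
    {
        uint16_t qid   = cdw10 & 0xffff;          /* CDW10[15:0]  queue identifier */
        uint16_t qsize = (cdw10 >> 16) & 0xffff;  /* CDW10[31:16] queue size, 0's based */
        unsigned pc    = cdw11 & 0x1;             /* CDW11[0]     physically contiguous */
        unsigned ien   = (cdw11 >> 1) & 0x1;      /* CDW11[1]     interrupts enabled */
        uint16_t iv    = (cdw11 >> 16) & 0xffff;  /* CDW11[31:16] interrupt vector */

        printf("CREATE IO CQ: qid=%u qsize=%u (entries=%u) pc=%u ien=%u iv=%u\n",
               qid, qsize, qsize + 1u, pc, ien, iv);
    }

    int main(void)
    {
        /* Dword values copied from one of the log lines above. */
        decode_create_io_cq(0x00000000, 0x02330002);
        return 0;
    }
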
00:08:56.180 [2024-07-15 12:25:51.221738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.221752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.180 [2024-07-15 12:25:51.221808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:29292929 cdw11:29290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.221823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.180 [2024-07-15 12:25:51.221875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fffb2900 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.221889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.180 #52 NEW cov: 12150 ft: 15257 corp: 27/668b lim: 35 exec/s: 52 rss: 73Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:08:56.180 [2024-07-15 12:25:51.271487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:29290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.271512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.180 [2024-07-15 12:25:51.271574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:29002929 cdw11:fffb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.180 [2024-07-15 12:25:51.271589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.180 #53 NEW cov: 12150 ft: 15469 corp: 28/687b lim: 35 exec/s: 53 rss: 73Mb L: 19/35 MS: 1 EraseBytes- 00:08:56.439 [2024-07-15 12:25:51.321947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.321971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.439 [2024-07-15 12:25:51.322026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.322040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.439 [2024-07-15 12:25:51.322093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.322107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.439 [2024-07-15 12:25:51.322160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.322173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:08:56.439 #54 NEW cov: 12150 ft: 15475 corp: 29/720b lim: 35 exec/s: 54 rss: 74Mb L: 33/35 MS: 1 ChangeBinInt- 00:08:56.439 [2024-07-15 12:25:51.371949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.371974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.439 [2024-07-15 12:25:51.372046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.372060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.439 [2024-07-15 12:25:51.372113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000fffb cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.439 [2024-07-15 12:25:51.372127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.440 #55 NEW cov: 12150 ft: 15486 corp: 30/742b lim: 35 exec/s: 55 rss: 74Mb L: 22/35 MS: 1 ShuffleBytes- 00:08:56.440 [2024-07-15 12:25:51.412193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.412216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.412272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2fd602e4 cdw11:b9e40001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.412286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.412337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00002700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.412351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.412404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.412417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.440 #56 NEW cov: 12150 ft: 15530 corp: 31/776b lim: 35 exec/s: 56 rss: 74Mb L: 34/35 MS: 1 ChangeBit- 00:08:56.440 [2024-07-15 12:25:51.462293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.462316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.462371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 
12:25:51.462384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.462453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:29292929 cdw11:29290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.462466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.462522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff2929 cdw11:fb000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.462540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.440 #57 NEW cov: 12150 ft: 15546 corp: 32/808b lim: 35 exec/s: 57 rss: 74Mb L: 32/35 MS: 1 InsertByte- 00:08:56.440 [2024-07-15 12:25:51.502294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.502321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.502393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.502408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.502462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0000fffb cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.502475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.440 #58 NEW cov: 12150 ft: 15603 corp: 33/830b lim: 35 exec/s: 58 rss: 74Mb L: 22/35 MS: 1 ShuffleBytes- 00:08:56.440 [2024-07-15 12:25:51.552394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.552421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.552474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00020002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.552488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.440 [2024-07-15 12:25:51.552544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffb0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.440 [2024-07-15 12:25:51.552573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.699 #59 NEW cov: 12150 ft: 15615 corp: 34/854b lim: 35 exec/s: 59 rss: 74Mb L: 24/35 MS: 1 PersAutoDict- DE: "\002\000"- 00:08:56.699 [2024-07-15 12:25:51.592680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02330002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.699 [2024-07-15 12:25:51.592704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.699 [2024-07-15 12:25:51.592777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008d02 cdw11:00d60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.699 [2024-07-15 12:25:51.592791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.699 [2024-07-15 12:25:51.592846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.592859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.592912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.592926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.700 #60 NEW cov: 12150 ft: 15629 corp: 35/885b lim: 35 exec/s: 60 rss: 74Mb L: 31/35 MS: 1 InsertRepeatedBytes- 00:08:56.700 [2024-07-15 12:25:51.642858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.642882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.642958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00270002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.642973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.643026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:29292929 cdw11:29290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.643040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.643091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff2929 cdw11:fb000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.643105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.700 #61 NEW cov: 12150 ft: 15656 corp: 36/917b lim: 35 exec/s: 61 rss: 74Mb L: 32/35 MS: 1 InsertByte- 00:08:56.700 [2024-07-15 12:25:51.682992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.683016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.683091] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.683106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.683159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:29292929 cdw11:29290000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.683172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.683224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00ff2929 cdw11:fb000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.683237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.700 #62 NEW cov: 12150 ft: 15667 corp: 37/949b lim: 35 exec/s: 62 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:08:56.700 [2024-07-15 12:25:51.733078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02330002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.733102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.733171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008d02 cdw11:00d60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.733186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.733237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:d6d6d6d6 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.733251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.700 [2024-07-15 12:25:51.733302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f7ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.700 [2024-07-15 12:25:51.733316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.700 #63 NEW cov: 12150 ft: 15686 corp: 38/980b lim: 35 exec/s: 31 rss: 74Mb L: 31/35 MS: 1 ChangeBinInt- 00:08:56.700 #63 DONE cov: 12150 ft: 15686 corp: 38/980b lim: 35 exec/s: 31 rss: 74Mb 00:08:56.700 ###### Recommended dictionary. ###### 00:08:56.700 "\002\000" # Uses: 2 00:08:56.700 "\377\001\000\000" # Uses: 1 00:08:56.700 "\344/\326\271\344\211'\000" # Uses: 1 00:08:56.700 "\000\000\000\000\0023y\215" # Uses: 0 00:08:56.700 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:56.700 ###### End of recommended dictionary. 
###### 00:08:56.700 Done 63 runs in 2 second(s) 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:56.959 12:25:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:08:56.959 [2024-07-15 12:25:51.938350] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
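
The banner below starts fuzzer type 5 (admin CREATE IO SQ, opcode 01) against the listener on port 4405. The general shape of a libFuzzer entry point driving such a run is sketched here in C as an illustration only; SPDK's actual harness is test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c (TestOneInput and the fuzz_admin_* handlers named in the NEW_FUNC lines) and it connects to the target described by the trid string, so the struct and submit routine below are placeholders, not SPDK code.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Placeholder for the admin command dwords being fuzzed; the real harness
     * uses struct spdk_nvme_cmd and sends it to the NVMe-oF target. */
    struct admin_cmd {
        uint8_t  opc;
        uint32_t cdw10;
        uint32_t cdw11;
    };

    /* Placeholder submit routine; in the real harness the completion that comes
     * back is what nvme_qpair.c prints in the log lines around this point. */
    static void submit_admin_cmd(const struct admin_cmd *cmd)
    {
        (void)cmd;
    }

    /* libFuzzer calls this once per generated input; returning 0 keeps fuzzing. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        struct admin_cmd cmd = {0};

        if (size < sizeof(cmd.cdw10) + sizeof(cmd.cdw11)) {
            return 0;  /* not enough bytes to populate the fuzzed dwords */
        }

        cmd.opc = 0x01;  /* CREATE IO SQ, the opcode exercised by this run */
        memcpy(&cmd.cdw10, data, sizeof(cmd.cdw10));
        memcpy(&cmd.cdw11, data + sizeof(cmd.cdw10), sizeof(cmd.cdw11));
        submit_admin_cmd(&cmd);
        return 0;
    }
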
00:08:56.959 [2024-07-15 12:25:51.938407] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159913 ] 00:08:56.959 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.218 [2024-07-15 12:25:52.136857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.218 [2024-07-15 12:25:52.209194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.218 [2024-07-15 12:25:52.269013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.218 [2024-07-15 12:25:52.285213] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:08:57.218 INFO: Running with entropic power schedule (0xFF, 100). 00:08:57.218 INFO: Seed: 2509082239 00:08:57.218 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:08:57.218 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:08:57.218 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:57.218 INFO: A corpus is not provided, starting from an empty corpus 00:08:57.218 #2 INITED exec/s: 0 rss: 66Mb 00:08:57.218 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:57.218 This may also happen if the target rejected all inputs we tried so far 00:08:57.218 [2024-07-15 12:25:52.334047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a00 cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.218 [2024-07-15 12:25:52.334077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.737 NEW_FUNC[1/696]: 0x48c180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:08:57.737 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:57.737 #16 NEW cov: 11917 ft: 11897 corp: 2/10b lim: 45 exec/s: 0 rss: 73Mb L: 9/9 MS: 4 ShuffleBytes-CrossOver-ChangeBit-CMP- DE: "\000'\211\345\320\327U8"- 00:08:57.737 [2024-07-15 12:25:52.675356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.675397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.737 [2024-07-15 12:25:52.675453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.675468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.737 [2024-07-15 12:25:52.675521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.675539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.737 [2024-07-15 
12:25:52.675592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.675606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.737 #19 NEW cov: 12047 ft: 13363 corp: 3/49b lim: 45 exec/s: 0 rss: 73Mb L: 39/39 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:08:57.737 [2024-07-15 12:25:52.724917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.724943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.737 #20 NEW cov: 12053 ft: 13639 corp: 4/58b lim: 45 exec/s: 0 rss: 73Mb L: 9/39 MS: 1 PersAutoDict- DE: "\000'\211\345\320\327U8"- 00:08:57.737 [2024-07-15 12:25:52.765118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.765143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.737 [2024-07-15 12:25:52.765199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.765213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.737 #22 NEW cov: 12138 ft: 14182 corp: 5/81b lim: 45 exec/s: 0 rss: 73Mb L: 23/39 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:57.737 [2024-07-15 12:25:52.815310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.815335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.737 [2024-07-15 12:25:52.815391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:215f5f5f cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.737 [2024-07-15 12:25:52.815408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.737 #23 NEW cov: 12138 ft: 14251 corp: 6/104b lim: 45 exec/s: 0 rss: 73Mb L: 23/39 MS: 1 ChangeByte- 00:08:57.737 [2024-07-15 12:25:52.865425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a00 cdw11:e5d00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:52.865450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.996 [2024-07-15 12:25:52.865505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d0d789e5 cdw11:55d70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:52.865520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.996 #24 NEW cov: 12138 ft: 14315 corp: 7/122b lim: 45 exec/s: 0 rss: 74Mb L: 18/39 MS: 1 
CrossOver- 00:08:57.996 [2024-07-15 12:25:52.915395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a0a cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:52.915418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.996 #25 NEW cov: 12138 ft: 14377 corp: 8/131b lim: 45 exec/s: 0 rss: 74Mb L: 9/39 MS: 1 CrossOver- 00:08:57.996 [2024-07-15 12:25:52.965568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:000ae589 cdw11:d0270006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:52.965594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.996 #26 NEW cov: 12138 ft: 14469 corp: 9/140b lim: 45 exec/s: 0 rss: 74Mb L: 9/39 MS: 1 ShuffleBytes- 00:08:57.996 [2024-07-15 12:25:53.005690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:53.005715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.996 #27 NEW cov: 12138 ft: 14489 corp: 10/157b lim: 45 exec/s: 0 rss: 74Mb L: 17/39 MS: 1 PersAutoDict- DE: "\000'\211\345\320\327U8"- 00:08:57.996 [2024-07-15 12:25:53.045752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a0a cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:53.045776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.996 #28 NEW cov: 12138 ft: 14547 corp: 11/166b lim: 45 exec/s: 0 rss: 74Mb L: 9/39 MS: 1 ChangeBit- 00:08:57.996 [2024-07-15 12:25:53.096052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e50a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:53.096076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.996 [2024-07-15 12:25:53.096150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d7d0e5d0 cdw11:55d70002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.996 [2024-07-15 12:25:53.096165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.996 #29 NEW cov: 12138 ft: 14580 corp: 12/184b lim: 45 exec/s: 0 rss: 74Mb L: 18/39 MS: 1 CrossOver- 00:08:58.253 [2024-07-15 12:25:53.136152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:005f000a cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.253 [2024-07-15 12:25:53.136176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.253 [2024-07-15 12:25:53.136231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.136247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:08:58.254 #30 NEW cov: 12138 ft: 14608 corp: 13/203b lim: 45 exec/s: 0 rss: 74Mb L: 19/39 MS: 1 EraseBytes- 00:08:58.254 [2024-07-15 12:25:53.176619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.176645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.254 [2024-07-15 12:25:53.176718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.176733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.254 [2024-07-15 12:25:53.176786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.176800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.254 [2024-07-15 12:25:53.176850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.176863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.254 #33 NEW cov: 12138 ft: 14663 corp: 14/240b lim: 45 exec/s: 0 rss: 74Mb L: 37/39 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:58.254 [2024-07-15 12:25:53.216230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c189e592 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.216256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.254 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:58.254 #34 NEW cov: 12161 ft: 14735 corp: 15/249b lim: 45 exec/s: 0 rss: 74Mb L: 9/39 MS: 1 CMP- DE: "\222\301\211\012\000\000\000\000"- 00:08:58.254 [2024-07-15 12:25:53.266387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a0a cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.266413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.254 #35 NEW cov: 12161 ft: 14779 corp: 16/266b lim: 45 exec/s: 0 rss: 74Mb L: 17/39 MS: 1 CMP- DE: "\001'\211\346P\0366\266"- 00:08:58.254 [2024-07-15 12:25:53.316494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c125e592 cdw11:890a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.316520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.254 #36 NEW cov: 12161 ft: 14786 corp: 17/276b lim: 45 exec/s: 36 rss: 74Mb L: 10/39 MS: 1 InsertByte- 00:08:58.254 [2024-07-15 12:25:53.366818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e50a0000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.366844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.254 [2024-07-15 12:25:53.366919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:501e89e6 cdw11:36b60002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.254 [2024-07-15 12:25:53.366933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.512 #37 NEW cov: 12161 ft: 14826 corp: 18/294b lim: 45 exec/s: 37 rss: 74Mb L: 18/39 MS: 1 PersAutoDict- DE: "\001'\211\346P\0366\266"- 00:08:58.512 [2024-07-15 12:25:53.416956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.416982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.512 [2024-07-15 12:25:53.417038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:21ff5f5f cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.417053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.512 #38 NEW cov: 12161 ft: 14861 corp: 19/317b lim: 45 exec/s: 38 rss: 74Mb L: 23/39 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:58.512 [2024-07-15 12:25:53.466933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a0a cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.466959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.512 #39 NEW cov: 12161 ft: 14927 corp: 20/334b lim: 45 exec/s: 39 rss: 74Mb L: 17/39 MS: 1 ChangeASCIIInt- 00:08:58.512 [2024-07-15 12:25:53.517264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:d01ae527 cdw11:89000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.517290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.512 [2024-07-15 12:25:53.517346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d0d789e5 cdw11:55d70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.517360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.512 #40 NEW cov: 12161 ft: 15007 corp: 21/352b lim: 45 exec/s: 40 rss: 75Mb L: 18/39 MS: 1 ShuffleBytes- 00:08:58.512 [2024-07-15 12:25:53.567204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c189e592 cdw11:0aff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.567229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.512 #41 NEW cov: 12161 ft: 15074 corp: 22/361b lim: 45 exec/s: 41 rss: 75Mb L: 9/39 MS: 1 ChangeByte- 00:08:58.512 [2024-07-15 12:25:53.607485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000000a 
cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.607510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.512 [2024-07-15 12:25:53.607571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.512 [2024-07-15 12:25:53.607585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.512 #42 NEW cov: 12161 ft: 15087 corp: 23/384b lim: 45 exec/s: 42 rss: 75Mb L: 23/39 MS: 1 ChangeByte- 00:08:58.771 [2024-07-15 12:25:53.647421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:005f000a cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.647444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.771 #43 NEW cov: 12161 ft: 15107 corp: 24/395b lim: 45 exec/s: 43 rss: 75Mb L: 11/39 MS: 1 EraseBytes- 00:08:58.771 [2024-07-15 12:25:53.697603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:d0d70ae5 cdw11:01270004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.697630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.771 #44 NEW cov: 12161 ft: 15120 corp: 25/404b lim: 45 exec/s: 44 rss: 75Mb L: 9/39 MS: 1 CrossOver- 00:08:58.771 [2024-07-15 12:25:53.737863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a00 cdw11:e5000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.737889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.771 [2024-07-15 12:25:53.737943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d0d7e5d0 cdw11:55d70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.737957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.771 #45 NEW cov: 12161 ft: 15131 corp: 26/422b lim: 45 exec/s: 45 rss: 75Mb L: 18/39 MS: 1 ShuffleBytes- 00:08:58.771 [2024-07-15 12:25:53.777826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e50a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.777851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.771 #46 NEW cov: 12161 ft: 15140 corp: 27/436b lim: 45 exec/s: 46 rss: 75Mb L: 14/39 MS: 1 EraseBytes- 00:08:58.771 [2024-07-15 12:25:53.828172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:5f5f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.828196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.771 [2024-07-15 12:25:53.828253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f0002 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.828267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.771 #47 NEW cov: 12161 ft: 15156 corp: 28/459b lim: 45 exec/s: 47 rss: 75Mb L: 23/39 MS: 1 ShuffleBytes- 00:08:58.771 [2024-07-15 12:25:53.868153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e50a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.771 [2024-07-15 12:25:53.868177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.030 #48 NEW cov: 12161 ft: 15168 corp: 29/473b lim: 45 exec/s: 48 rss: 75Mb L: 14/39 MS: 1 ChangeBinInt- 00:08:59.030 [2024-07-15 12:25:53.918269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27e51a0a cdw11:d0d70002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:53.918293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.030 #50 NEW cov: 12161 ft: 15185 corp: 30/489b lim: 45 exec/s: 50 rss: 75Mb L: 16/39 MS: 2 EraseBytes-PersAutoDict- DE: "\222\301\211\012\000\000\000\000"- 00:08:59.030 [2024-07-15 12:25:53.958893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a0a cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:53.958917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.030 [2024-07-15 12:25:53.958986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:501e89e6 cdw11:36b80005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:53.959000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.030 [2024-07-15 12:25:53.959052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:b8b8b8b8 cdw11:b8b80005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:53.959072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.030 [2024-07-15 12:25:53.959122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:b8b8b8b8 cdw11:b8b80005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:53.959136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.030 #51 NEW cov: 12161 ft: 15208 corp: 31/533b lim: 45 exec/s: 51 rss: 75Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:08:59.030 [2024-07-15 12:25:54.008486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a00 cdw11:e5d00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:54.008509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.030 #52 NEW cov: 12161 ft: 15233 corp: 32/542b lim: 45 exec/s: 52 rss: 75Mb L: 9/44 MS: 1 EraseBytes- 00:08:59.030 [2024-07-15 12:25:54.048818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a00 
cdw11:e5000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:54.048842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.030 [2024-07-15 12:25:54.048911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d0d7e5d0 cdw11:55d70001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.030 [2024-07-15 12:25:54.048926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.030 #53 NEW cov: 12161 ft: 15240 corp: 33/560b lim: 45 exec/s: 53 rss: 75Mb L: 18/44 MS: 1 ChangeBit- 00:08:59.030 [2024-07-15 12:25:54.098829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2589e592 cdw11:0a000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.031 [2024-07-15 12:25:54.098853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.031 #54 NEW cov: 12161 ft: 15242 corp: 34/569b lim: 45 exec/s: 54 rss: 75Mb L: 9/44 MS: 1 ChangeByte- 00:08:59.031 [2024-07-15 12:25:54.139395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27891a0a cdw11:e5d00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.031 [2024-07-15 12:25:54.139418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.031 [2024-07-15 12:25:54.139488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:501e89e6 cdw11:36b80005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.031 [2024-07-15 12:25:54.139502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.031 [2024-07-15 12:25:54.139558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4a47b8b8 cdw11:47470005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.031 [2024-07-15 12:25:54.139572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.031 [2024-07-15 12:25:54.139636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:b8b8b8b8 cdw11:b8b80005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.031 [2024-07-15 12:25:54.139649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.290 #55 NEW cov: 12161 ft: 15255 corp: 35/613b lim: 45 exec/s: 55 rss: 75Mb L: 44/44 MS: 1 ChangeBinInt- 00:08:59.290 [2024-07-15 12:25:54.189039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2b270a0a cdw11:89e50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.290 [2024-07-15 12:25:54.189063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.290 #56 NEW cov: 12161 ft: 15284 corp: 36/623b lim: 45 exec/s: 56 rss: 75Mb L: 10/44 MS: 1 InsertByte- 00:08:59.290 [2024-07-15 12:25:54.229218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c1890a92 cdw11:0aff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.290 [2024-07-15 12:25:54.229242] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.290 #57 NEW cov: 12161 ft: 15301 corp: 37/632b lim: 45 exec/s: 57 rss: 75Mb L: 9/44 MS: 1 CrossOver- 00:08:59.290 [2024-07-15 12:25:54.279328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:27890a00 cdw11:e50a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.290 [2024-07-15 12:25:54.279353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.290 #58 NEW cov: 12161 ft: 15318 corp: 38/646b lim: 45 exec/s: 58 rss: 75Mb L: 14/44 MS: 1 ChangeByte- 00:08:59.290 [2024-07-15 12:25:54.329477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:2b270a0a cdw11:89e50006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.290 [2024-07-15 12:25:54.329503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.290 #59 NEW cov: 12161 ft: 15328 corp: 39/656b lim: 45 exec/s: 29 rss: 75Mb L: 10/44 MS: 1 ChangeBit- 00:08:59.290 #59 DONE cov: 12161 ft: 15328 corp: 39/656b lim: 45 exec/s: 29 rss: 75Mb 00:08:59.290 ###### Recommended dictionary. ###### 00:08:59.290 "\000'\211\345\320\327U8" # Uses: 2 00:08:59.290 "\222\301\211\012\000\000\000\000" # Uses: 1 00:08:59.290 "\001'\211\346P\0366\266" # Uses: 1 00:08:59.290 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:59.290 ###### End of recommended dictionary. ###### 00:08:59.290 Done 59 runs in 2 second(s) 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:59.548 12:25:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:59.548 [2024-07-15 12:25:54.546678] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:08:59.548 [2024-07-15 12:25:54.546773] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160220 ] 00:08:59.548 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.807 [2024-07-15 12:25:54.753739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.807 [2024-07-15 12:25:54.827951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.807 [2024-07-15 12:25:54.887834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.807 [2024-07-15 12:25:54.904016] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:59.807 INFO: Running with entropic power schedule (0xFF, 100). 00:08:59.807 INFO: Seed: 834129031 00:09:00.066 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:00.066 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:00.066 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:00.066 INFO: A corpus is not provided, starting from an empty corpus 00:09:00.066 #2 INITED exec/s: 0 rss: 65Mb 00:09:00.066 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:00.066 This may also happen if the target rejected all inputs we tried so far 00:09:00.066 [2024-07-15 12:25:54.959719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:00.066 [2024-07-15 12:25:54.959748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.066 [2024-07-15 12:25:54.959814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.066 [2024-07-15 12:25:54.959828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.066 [2024-07-15 12:25:54.959877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.066 [2024-07-15 12:25:54.959891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.066 [2024-07-15 12:25:54.959940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.066 [2024-07-15 12:25:54.959953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.325 NEW_FUNC[1/694]: 0x48e990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:09:00.325 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:00.325 #3 NEW cov: 11834 ft: 11834 corp: 2/10b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:00.325 [2024-07-15 12:25:55.300436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.300476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.300532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.300546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.300596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.300609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.325 #5 NEW cov: 11964 ft: 12609 corp: 3/16b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 2 ChangeByte-CrossOver- 00:09:00.325 [2024-07-15 12:25:55.340458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.340483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.340556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.340570] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.340631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.340644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.325 #6 NEW cov: 11970 ft: 12902 corp: 4/22b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 1 CopyPart- 00:09:00.325 [2024-07-15 12:25:55.390593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.390617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.390667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.390680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.390730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.390743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.325 #7 NEW cov: 12055 ft: 13166 corp: 5/28b lim: 10 exec/s: 0 rss: 72Mb L: 6/9 MS: 1 ChangeBinInt- 00:09:00.325 [2024-07-15 12:25:55.440876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.440901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.440969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.440983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.441033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.441046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.325 [2024-07-15 12:25:55.441097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.325 [2024-07-15 12:25:55.441109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.585 #8 NEW cov: 12055 ft: 13251 corp: 6/37b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:00.585 [2024-07-15 12:25:55.490937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.490961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.491029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.491042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.491093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.491109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.491158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009f9f cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.491172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.585 #9 NEW cov: 12055 ft: 13325 corp: 7/46b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:00.585 [2024-07-15 12:25:55.531072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.531095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.531146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.531159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.531210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.531223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.531274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.531286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.585 #10 NEW cov: 12055 ft: 13376 corp: 8/55b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CopyPart- 00:09:00.585 [2024-07-15 12:25:55.581207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000123 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.581232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.581283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.581298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.581365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.581379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.581428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.581442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.585 #11 NEW cov: 12055 ft: 13458 corp: 9/64b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeByte- 00:09:00.585 [2024-07-15 12:25:55.631396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.631422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.631491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.631506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.631560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.631578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.631630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001f00 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.631644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.585 #12 NEW cov: 12055 ft: 13500 corp: 10/72b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 CMP- DE: "\001\037"- 00:09:00.585 [2024-07-15 12:25:55.671471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.671496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.671547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000025 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.671577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.671628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.671642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.585 [2024-07-15 12:25:55.671691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001f00 cdw11:00000000 00:09:00.585 [2024-07-15 12:25:55.671704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.585 #13 NEW cov: 12055 ft: 13561 corp: 11/80b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 1 ChangeByte- 00:09:00.844 [2024-07-15 12:25:55.721675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000f00 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.721699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.844 
[2024-07-15 12:25:55.721751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.721764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.721813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.721826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.721875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.721887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.844 #18 NEW cov: 12055 ft: 13574 corp: 12/88b lim: 10 exec/s: 0 rss: 72Mb L: 8/9 MS: 5 ChangeBit-ShuffleBytes-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:09:00.844 [2024-07-15 12:25:55.761758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.761782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.761848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.761862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.761912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.761928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.761977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.761991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.844 #19 NEW cov: 12055 ft: 13592 corp: 13/97b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:09:00.844 [2024-07-15 12:25:55.802011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.802036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.802087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.802100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.802153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.802166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.802216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009f12 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.802229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.802278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f9f cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.802291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.844 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:00.844 #20 NEW cov: 12078 ft: 13700 corp: 14/107b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:09:00.844 [2024-07-15 12:25:55.851882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.851905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.851973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.851987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.852038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.852051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.844 #21 NEW cov: 12078 ft: 13717 corp: 15/114b lim: 10 exec/s: 0 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:09:00.844 [2024-07-15 12:25:55.902259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.902285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.902337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000025 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.902351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.902402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.902418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.902468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.902480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.902534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00001f00 cdw11:00000000 
00:09:00.844 [2024-07-15 12:25:55.902548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.844 #22 NEW cov: 12078 ft: 13738 corp: 16/124b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:09:00.844 [2024-07-15 12:25:55.952298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.952325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.952376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f00 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.952390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.952440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.952453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.844 [2024-07-15 12:25:55.952503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:00.844 [2024-07-15 12:25:55.952516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.103 #23 NEW cov: 12078 ft: 13741 corp: 17/133b lim: 10 exec/s: 23 rss: 73Mb L: 9/10 MS: 1 PersAutoDict- DE: "\001\037"- 00:09:01.103 [2024-07-15 12:25:55.992543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000019f cdw11:00000000 00:09:01.103 [2024-07-15 12:25:55.992570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:55.992639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000023ff cdw11:00000000 00:09:01.104 [2024-07-15 12:25:55.992654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:55.992717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:01.104 [2024-07-15 12:25:55.992730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:55.992781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:55.992794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:55.992844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:55.992857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.104 #24 NEW cov: 12078 ft: 13784 corp: 18/143b lim: 10 exec/s: 24 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:09:01.104 [2024-07-15 12:25:56.042609] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000123 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.042634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.042688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.042701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.042754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.042767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.042818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000074 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.042831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.104 #25 NEW cov: 12078 ft: 13821 corp: 19/152b lim: 10 exec/s: 25 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:09:01.104 [2024-07-15 12:25:56.082394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000f01 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.082418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.082483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.082497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.104 #26 NEW cov: 12078 ft: 13995 corp: 20/156b lim: 10 exec/s: 26 rss: 73Mb L: 4/10 MS: 1 CrossOver- 00:09:01.104 [2024-07-15 12:25:56.132942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000019f cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.132967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.133034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000023ff cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.133048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.133096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.133110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.133158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000900 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.133171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.133223] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.133236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.104 #27 NEW cov: 12078 ft: 14022 corp: 21/166b lim: 10 exec/s: 27 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:09:01.104 [2024-07-15 12:25:56.182960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.182985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.183051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.183065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.183118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.183133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.183185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009fd7 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.183198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.104 #28 NEW cov: 12078 ft: 14041 corp: 22/175b lim: 10 exec/s: 28 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:09:01.104 [2024-07-15 12:25:56.223034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.223060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.223110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f40 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.223124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.223173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.223187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.104 [2024-07-15 12:25:56.223238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.104 [2024-07-15 12:25:56.223251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.363 #29 NEW cov: 12078 ft: 14058 corp: 23/184b lim: 10 exec/s: 29 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:09:01.363 [2024-07-15 12:25:56.273306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000f34 cdw11:00000000 00:09:01.363 [2024-07-15 12:25:56.273330] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.363 [2024-07-15 12:25:56.273382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003434 cdw11:00000000 00:09:01.363 [2024-07-15 12:25:56.273395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.363 [2024-07-15 12:25:56.273444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003434 cdw11:00000000 00:09:01.363 [2024-07-15 12:25:56.273457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.363 [2024-07-15 12:25:56.273506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00003401 cdw11:00000000 00:09:01.363 [2024-07-15 12:25:56.273519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.363 [2024-07-15 12:25:56.273572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.363 [2024-07-15 12:25:56.273586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.363 #30 NEW cov: 12078 ft: 14108 corp: 24/194b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:09:01.363 [2024-07-15 12:25:56.323427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 00:09:01.363 [2024-07-15 12:25:56.323451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.363 [2024-07-15 12:25:56.323521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.323545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.323595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.323609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.323658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009f12 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.323672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.323723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00009f9f cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.323736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.364 #31 NEW cov: 12078 ft: 14111 corp: 25/204b lim: 10 exec/s: 31 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:09:01.364 [2024-07-15 12:25:56.373473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.373497] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.373575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f40 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.373590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.373640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.373654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.373706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000003a cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.373719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.364 #32 NEW cov: 12078 ft: 14114 corp: 26/213b lim: 10 exec/s: 32 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:09:01.364 [2024-07-15 12:25:56.423611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000011f cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.423635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.423701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.423716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.423766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.423780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.423831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.423844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.364 #33 NEW cov: 12078 ft: 14127 corp: 27/222b lim: 10 exec/s: 33 rss: 74Mb L: 9/10 MS: 1 PersAutoDict- DE: "\001\037"- 00:09:01.364 [2024-07-15 12:25:56.473727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.473751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.473820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.473834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.473884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.364 [2024-07-15 
12:25:56.473897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.364 [2024-07-15 12:25:56.473948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009f9f cdw11:00000000 00:09:01.364 [2024-07-15 12:25:56.473961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.624 #34 NEW cov: 12078 ft: 14132 corp: 28/230b lim: 10 exec/s: 34 rss: 74Mb L: 8/10 MS: 1 EraseBytes- 00:09:01.624 [2024-07-15 12:25:56.513832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.513856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.513909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.513923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.513973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.513986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.514035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001f00 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.514048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.624 #35 NEW cov: 12078 ft: 14174 corp: 29/238b lim: 10 exec/s: 35 rss: 74Mb L: 8/10 MS: 1 ShuffleBytes- 00:09:01.624 [2024-07-15 12:25:56.554105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.554129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.554200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.554214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.554266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.554279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.554330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.554342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.554395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 
12:25:56.554408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.624 #36 NEW cov: 12078 ft: 14209 corp: 30/248b lim: 10 exec/s: 36 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:09:01.624 [2024-07-15 12:25:56.593938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.593961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.594029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.594043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.594090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.594104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.624 #37 NEW cov: 12078 ft: 14245 corp: 31/255b lim: 10 exec/s: 37 rss: 74Mb L: 7/10 MS: 1 ChangeBit- 00:09:01.624 [2024-07-15 12:25:56.634213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.634236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.634304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.634317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.634364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c400 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.634378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.634426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.634440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.624 #38 NEW cov: 12078 ft: 14312 corp: 32/264b lim: 10 exec/s: 38 rss: 74Mb L: 9/10 MS: 1 ChangeByte- 00:09:01.624 [2024-07-15 12:25:56.674214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.674240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.674307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.674321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.674370] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.674383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.624 #39 NEW cov: 12078 ft: 14325 corp: 33/270b lim: 10 exec/s: 39 rss: 74Mb L: 6/10 MS: 1 ShuffleBytes- 00:09:01.624 [2024-07-15 12:25:56.714455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.714480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.624 [2024-07-15 12:25:56.714551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.624 [2024-07-15 12:25:56.714565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.625 [2024-07-15 12:25:56.714613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:09:01.625 [2024-07-15 12:25:56.714638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.625 [2024-07-15 12:25:56.714685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fd00 cdw11:00000000 00:09:01.625 [2024-07-15 12:25:56.714698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.625 #40 NEW cov: 12078 ft: 14340 corp: 34/278b lim: 10 exec/s: 40 rss: 74Mb L: 8/10 MS: 1 ChangeByte- 00:09:01.884 [2024-07-15 12:25:56.754593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.754618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.884 [2024-07-15 12:25:56.754668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.754681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.884 [2024-07-15 12:25:56.754732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.754745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.884 [2024-07-15 12:25:56.754794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000fd00 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.754807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.884 #41 NEW cov: 12078 ft: 14361 corp: 35/286b lim: 10 exec/s: 41 rss: 74Mb L: 8/10 MS: 1 ChangeBit- 00:09:01.884 [2024-07-15 12:25:56.804635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.804659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.884 [2024-07-15 12:25:56.804727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f50 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.804741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.884 [2024-07-15 12:25:56.804790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.884 [2024-07-15 12:25:56.804803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.804853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.804867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.885 #42 NEW cov: 12078 ft: 14368 corp: 36/295b lim: 10 exec/s: 42 rss: 74Mb L: 9/10 MS: 1 ChangeBit- 00:09:01.885 [2024-07-15 12:25:56.844948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff00 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.844972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.845041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000025 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.845054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.845107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.845120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.845173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000001ff cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.845186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.845234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00001f00 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.845247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.885 #43 NEW cov: 12078 ft: 14398 corp: 37/305b lim: 10 exec/s: 43 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:01.885 [2024-07-15 12:25:56.895073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.895096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.895165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.895178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.895229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c48d cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.895242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.895293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.895307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.895356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.895369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.885 #44 NEW cov: 12078 ft: 14402 corp: 38/315b lim: 10 exec/s: 44 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:09:01.885 [2024-07-15 12:25:56.945158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000001ff cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.945182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.945249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000011f cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.945263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.945312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c48d cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.945326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.945378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.945391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.885 [2024-07-15 12:25:56.945440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.885 [2024-07-15 12:25:56.945454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.885 #45 NEW cov: 12078 ft: 14410 corp: 39/325b lim: 10 exec/s: 22 rss: 74Mb L: 10/10 MS: 1 PersAutoDict- DE: "\001\037"- 00:09:01.885 #45 DONE cov: 12078 ft: 14410 corp: 39/325b lim: 10 exec/s: 22 rss: 74Mb 00:09:01.885 ###### Recommended dictionary. ###### 00:09:01.885 "\001\037" # Uses: 3 00:09:01.885 ###### End of recommended dictionary. 
######
00:09:01.885 Done 45 runs in 2 second(s)
00:09:02.144 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:02.145 12:25:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 [2024-07-15 12:25:57.163463] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization...
00:09:02.145 [2024-07-15 12:25:57.163542] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160580 ] 00:09:02.145 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.403 [2024-07-15 12:25:57.369298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.403 [2024-07-15 12:25:57.442545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.403 [2024-07-15 12:25:57.502210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.403 [2024-07-15 12:25:57.518413] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:09:02.403 INFO: Running with entropic power schedule (0xFF, 100). 00:09:02.403 INFO: Seed: 3447125662 00:09:02.683 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:02.683 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:02.683 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:09:02.683 INFO: A corpus is not provided, starting from an empty corpus 00:09:02.683 #2 INITED exec/s: 0 rss: 65Mb 00:09:02.683 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:02.683 This may also happen if the target rejected all inputs we tried so far 00:09:02.683 [2024-07-15 12:25:57.588650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:02.683 [2024-07-15 12:25:57.588696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.970 NEW_FUNC[1/694]: 0x48f380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:09:02.970 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:02.970 #3 NEW cov: 11834 ft: 11835 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:09:02.970 [2024-07-15 12:25:57.940688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:02.970 [2024-07-15 12:25:57.940743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:57.940858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:02.970 [2024-07-15 12:25:57.940882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:57.940995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:02.970 [2024-07-15 12:25:57.941014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:57.941097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:02.970 [2024-07-15 12:25:57.941116] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:57.941206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:02.970 [2024-07-15 12:25:57.941224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.970 #4 NEW cov: 11964 ft: 12742 corp: 3/13b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:09:02.970 [2024-07-15 12:25:58.009772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:02.970 [2024-07-15 12:25:58.009799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.970 #10 NEW cov: 11970 ft: 12987 corp: 4/15b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 CrossOver- 00:09:02.970 [2024-07-15 12:25:58.061108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:02.970 [2024-07-15 12:25:58.061134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:58.061226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004e4e cdw11:00000000 00:09:02.970 [2024-07-15 12:25:58.061242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:58.061330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00004e4e cdw11:00000000 00:09:02.970 [2024-07-15 12:25:58.061346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:58.061429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00004e4e cdw11:00000000 00:09:02.970 [2024-07-15 12:25:58.061444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.970 [2024-07-15 12:25:58.061531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00004e4e cdw11:00000000 00:09:02.970 [2024-07-15 12:25:58.061563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.970 #11 NEW cov: 12055 ft: 13297 corp: 5/25b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:09:03.228 [2024-07-15 12:25:58.121753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:03.228 [2024-07-15 12:25:58.121779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.228 [2024-07-15 12:25:58.121862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.228 [2024-07-15 12:25:58.121878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.228 [2024-07-15 12:25:58.121959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.228 [2024-07-15 12:25:58.121973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.228 [2024-07-15 12:25:58.122053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.228 [2024-07-15 12:25:58.122071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.228 [2024-07-15 12:25:58.122158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:03.228 [2024-07-15 12:25:58.122173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.228 #12 NEW cov: 12055 ft: 13364 corp: 6/35b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:09:03.228 [2024-07-15 12:25:58.171060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2d cdw11:00000000 00:09:03.228 [2024-07-15 12:25:58.171089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.228 #13 NEW cov: 12055 ft: 13449 corp: 7/37b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 InsertByte- 00:09:03.228 [2024-07-15 12:25:58.222479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.222504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.222586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.222614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.222704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.222719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.222802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.222820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.222906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.222923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.229 #14 NEW cov: 12055 ft: 13500 corp: 8/47b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:03.229 [2024-07-15 12:25:58.282843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.282871] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.282963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.282978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.283068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.283085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.283177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.283193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.283273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.283290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.229 #15 NEW cov: 12055 ft: 13520 corp: 9/57b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:09:03.229 [2024-07-15 12:25:58.342445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.342470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.229 [2024-07-15 12:25:58.342567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.229 [2024-07-15 12:25:58.342583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.488 #16 NEW cov: 12055 ft: 13715 corp: 10/62b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:09:03.488 [2024-07-15 12:25:58.403754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.403781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.403874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.403890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.403985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.404001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.404082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.404098] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.404184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.404201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.488 #17 NEW cov: 12055 ft: 13785 corp: 11/72b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:09:03.488 [2024-07-15 12:25:58.453618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.453647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.453734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.453749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.453836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.453854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.453938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.453954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.488 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:03.488 #19 NEW cov: 12078 ft: 13831 corp: 12/81b lim: 10 exec/s: 0 rss: 73Mb L: 9/10 MS: 2 CrossOver-PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:03.488 [2024-07-15 12:25:58.524417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fff7 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.524444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.524538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.524554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.524637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.524654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.524743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.524761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.524855] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.524871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.488 #20 NEW cov: 12078 ft: 13849 corp: 13/91b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeBit- 00:09:03.488 [2024-07-15 12:25:58.584411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.584437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.584522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.584541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.488 [2024-07-15 12:25:58.584630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.488 [2024-07-15 12:25:58.584647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.488 #21 NEW cov: 12078 ft: 14003 corp: 14/98b lim: 10 exec/s: 21 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:09:03.746 [2024-07-15 12:25:58.635084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.635115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.635207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.635224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.635310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.635326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.635415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.635432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.635521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.635543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.747 #22 NEW cov: 12078 ft: 14019 corp: 15/108b lim: 10 exec/s: 22 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:03.747 [2024-07-15 12:25:58.704842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.704870] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.704958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.704976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.705061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.705077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.747 #23 NEW cov: 12078 ft: 14036 corp: 16/114b lim: 10 exec/s: 23 rss: 73Mb L: 6/10 MS: 1 CrossOver- 00:09:03.747 [2024-07-15 12:25:58.755608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.755636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.755719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.755736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.755826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.755843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.755923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.755940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.756028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.756044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:03.747 #24 NEW cov: 12078 ft: 14066 corp: 17/124b lim: 10 exec/s: 24 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:03.747 [2024-07-15 12:25:58.825360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.825390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.825493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.825511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.747 [2024-07-15 12:25:58.825613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:03.747 [2024-07-15 12:25:58.825631] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.747 #25 NEW cov: 12078 ft: 14137 corp: 18/131b lim: 10 exec/s: 25 rss: 73Mb L: 7/10 MS: 1 EraseBytes- 00:09:04.005 [2024-07-15 12:25:58.895355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000000ff cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.895384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:58.895468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.895486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.005 #26 NEW cov: 12078 ft: 14158 corp: 19/135b lim: 10 exec/s: 26 rss: 73Mb L: 4/10 MS: 1 EraseBytes- 00:09:04.005 [2024-07-15 12:25:58.956639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.956669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:58.956756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.956772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:58.956861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.956877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:58.956956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.956973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:58.957056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:58.957073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.005 #27 NEW cov: 12078 ft: 14199 corp: 20/145b lim: 10 exec/s: 27 rss: 73Mb L: 10/10 MS: 1 CopyPart- 00:09:04.005 [2024-07-15 12:25:59.016668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:59.016694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:59.016774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:59.016791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:59.016871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:59.016887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:59.016969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:59.016986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.005 [2024-07-15 12:25:59.017071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:09:04.005 [2024-07-15 12:25:59.017088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.005 #28 NEW cov: 12078 ft: 14222 corp: 21/155b lim: 10 exec/s: 28 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:09:04.005 [2024-07-15 12:25:59.066792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.005 [2024-07-15 12:25:59.066818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.006 [2024-07-15 12:25:59.066913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.006 [2024-07-15 12:25:59.066929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.006 [2024-07-15 12:25:59.067012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.006 [2024-07-15 12:25:59.067029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.006 [2024-07-15 12:25:59.067113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.006 [2024-07-15 12:25:59.067129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.006 [2024-07-15 12:25:59.067205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:09:04.006 [2024-07-15 12:25:59.067222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.006 #29 NEW cov: 12078 ft: 14237 corp: 22/165b lim: 10 exec/s: 29 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:04.006 [2024-07-15 12:25:59.126291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000032 cdw11:00000000 00:09:04.006 [2024-07-15 12:25:59.126316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.006 [2024-07-15 12:25:59.126398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.006 [2024-07-15 12:25:59.126424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.264 #30 NEW cov: 12078 ft: 14263 corp: 23/170b lim: 10 exec/s: 30 rss: 73Mb L: 5/10 MS: 1 InsertByte- 
00:09:04.264 [2024-07-15 12:25:59.187329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fff7 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.187354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.187432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.187448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.187531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.187550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.187644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.187660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.187744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.187761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.264 #31 NEW cov: 12078 ft: 14278 corp: 24/180b lim: 10 exec/s: 31 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:04.264 [2024-07-15 12:25:59.237782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fff7 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.237808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.237902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.237917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.237999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff24 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.238015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.238097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.238114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.238200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.238216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.264 #32 NEW cov: 12078 ft: 14288 corp: 25/190b lim: 10 exec/s: 32 rss: 73Mb L: 10/10 MS: 1 
ChangeByte- 00:09:04.264 [2024-07-15 12:25:59.297997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.298022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.298104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.298120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.298208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.298224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.298308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.298325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.264 [2024-07-15 12:25:59.298403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:09:04.264 [2024-07-15 12:25:59.298419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.265 #33 NEW cov: 12078 ft: 14307 corp: 26/200b lim: 10 exec/s: 33 rss: 73Mb L: 10/10 MS: 1 ChangeBit- 00:09:04.265 [2024-07-15 12:25:59.348339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.265 [2024-07-15 12:25:59.348365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.265 [2024-07-15 12:25:59.348455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.265 [2024-07-15 12:25:59.348471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.265 [2024-07-15 12:25:59.348569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.265 [2024-07-15 12:25:59.348585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.265 [2024-07-15 12:25:59.348669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:04.265 [2024-07-15 12:25:59.348686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.265 [2024-07-15 12:25:59.348765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:04.265 [2024-07-15 12:25:59.348781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.265 #34 NEW cov: 12078 ft: 14347 corp: 27/210b lim: 10 exec/s: 34 rss: 73Mb L: 10/10 MS: 1 
CopyPart- 00:09:04.524 [2024-07-15 12:25:59.398037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.398063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.398156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.398173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.398262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00001000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.398278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.524 #35 NEW cov: 12078 ft: 14419 corp: 28/217b lim: 10 exec/s: 35 rss: 73Mb L: 7/10 MS: 1 ChangeBit- 00:09:04.524 [2024-07-15 12:25:59.458437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.458462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.458551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.458567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.458653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.458668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.458745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.458762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.524 #36 NEW cov: 12078 ft: 14445 corp: 29/226b lim: 10 exec/s: 36 rss: 74Mb L: 9/10 MS: 1 ShuffleBytes- 00:09:04.524 [2024-07-15 12:25:59.518998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.519023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.519112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.519128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.519215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.519232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.519313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000080 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.519329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.519419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.519436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.524 #37 NEW cov: 12078 ft: 14455 corp: 30/236b lim: 10 exec/s: 37 rss: 74Mb L: 10/10 MS: 1 ChangeBit- 00:09:04.524 [2024-07-15 12:25:59.569120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.569145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.569240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.569255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.569336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.569353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.569437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002300 cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.569454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.524 [2024-07-15 12:25:59.569546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000010a cdw11:00000000 00:09:04.524 [2024-07-15 12:25:59.569575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:04.524 #38 NEW cov: 12078 ft: 14468 corp: 31/246b lim: 10 exec/s: 19 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:09:04.524 #38 DONE cov: 12078 ft: 14468 corp: 31/246b lim: 10 exec/s: 19 rss: 74Mb 00:09:04.524 ###### Recommended dictionary. ###### 00:09:04.524 "\000\000\000\000\000\000\000\000" # Uses: 3 00:09:04.524 ###### End of recommended dictionary. 
###### 00:09:04.524 Done 38 runs in 2 second(s) 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:04.783 12:25:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:09:04.783 [2024-07-15 12:25:59.765669] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:04.783 [2024-07-15 12:25:59.765724] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160942 ] 00:09:04.783 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.041 [2024-07-15 12:25:59.969212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.041 [2024-07-15 12:26:00.049078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.041 [2024-07-15 12:26:00.109056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.041 [2024-07-15 12:26:00.125265] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:09:05.041 INFO: Running with entropic power schedule (0xFF, 100). 00:09:05.041 INFO: Seed: 1761179005 00:09:05.041 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:05.041 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:05.041 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:05.041 INFO: A corpus is not provided, starting from an empty corpus 00:09:05.300 [2024-07-15 12:26:00.190653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.300 [2024-07-15 12:26:00.190686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.300 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 71Mb 00:09:05.300 [2024-07-15 12:26:00.230629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.300 [2024-07-15 12:26:00.230657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.300 #3 NEW cov: 11992 ft: 12364 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ChangeByte- 00:09:05.300 [2024-07-15 12:26:00.281386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.281416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.281473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.281487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.281559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.281574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.281631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.281644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.281698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.281711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.301 #4 NEW cov: 11998 ft: 13438 corp: 3/7b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:05.301 [2024-07-15 12:26:00.331532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.331558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.331632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.331647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.331702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.331715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.331769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.331782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.331836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.331849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.301 #5 NEW cov: 12083 ft: 13726 corp: 4/12b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:09:05.301 [2024-07-15 12:26:00.381703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.381728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.381802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.381820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.381872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.381886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.381941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.381954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.301 [2024-07-15 12:26:00.382008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.382022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.301 #6 NEW cov: 12083 ft: 13798 corp: 5/17b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:09:05.301 [2024-07-15 12:26:00.421171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.301 [2024-07-15 12:26:00.421196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.558 #7 NEW cov: 12083 ft: 13880 corp: 6/18b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:09:05.558 [2024-07-15 12:26:00.461293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.558 [2024-07-15 12:26:00.461318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.558 #8 NEW cov: 12083 ft: 13977 corp: 7/19b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:05.558 [2024-07-15 12:26:00.501381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.558 [2024-07-15 12:26:00.501407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.558 #9 NEW cov: 12083 ft: 14011 corp: 8/20b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:09:05.558 [2024-07-15 12:26:00.551693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.558 [2024-07-15 12:26:00.551719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.558 [2024-07-15 12:26:00.551778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.558 [2024-07-15 12:26:00.551792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.558 #10 NEW cov: 12083 ft: 14244 corp: 9/22b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:09:05.558 [2024-07-15 12:26:00.592055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.558 [2024-07-15 12:26:00.592081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.558 [2024-07-15 12:26:00.592136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.558 [2024-07-15 12:26:00.592150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.558 [2024-07-15 12:26:00.592208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.592222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.559 [2024-07-15 12:26:00.592292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.592307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.559 #11 NEW cov: 12083 ft: 14317 corp: 10/26b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:09:05.559 [2024-07-15 12:26:00.642375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.642401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.559 [2024-07-15 12:26:00.642458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.642472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.559 [2024-07-15 12:26:00.642530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.642544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.559 [2024-07-15 12:26:00.642599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.642613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.559 [2024-07-15 12:26:00.642667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.559 [2024-07-15 12:26:00.642682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.559 #12 NEW cov: 12083 ft: 14333 corp: 11/31b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertByte- 00:09:05.816 [2024-07-15 12:26:00.691892] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.816 [2024-07-15 12:26:00.691918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.816 #13 NEW cov: 12083 ft: 14377 corp: 12/32b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:05.816 [2024-07-15 12:26:00.732019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.816 [2024-07-15 12:26:00.732046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.816 #14 NEW cov: 12083 ft: 14406 corp: 13/33b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:09:05.816 [2024-07-15 12:26:00.772721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.816 [2024-07-15 12:26:00.772748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.816 [2024-07-15 12:26:00.772805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.816 [2024-07-15 12:26:00.772823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.816 [2024-07-15 12:26:00.772878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.816 [2024-07-15 12:26:00.772894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.816 [2024-07-15 12:26:00.772947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.816 [2024-07-15 12:26:00.772960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.816 [2024-07-15 12:26:00.773014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.773027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.817 #15 NEW cov: 12083 ft: 14436 corp: 14/38b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:09:05.817 [2024-07-15 12:26:00.812828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.812854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.812910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.812924] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.812976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.812990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.813045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.813059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.813113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.813128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:05.817 #16 NEW cov: 12083 ft: 14448 corp: 15/43b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CrossOver- 00:09:05.817 [2024-07-15 12:26:00.862388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.862414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.817 #17 NEW cov: 12083 ft: 14465 corp: 16/44b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CopyPart- 00:09:05.817 [2024-07-15 12:26:00.902997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.903023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.903075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.903093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.903145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.903159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.903211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.903224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.817 #18 NEW cov: 12083 ft: 14499 corp: 17/48b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 ChangeBinInt- 00:09:05.817 [2024-07-15 12:26:00.942807] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.942834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.817 [2024-07-15 12:26:00.942889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.817 [2024-07-15 12:26:00.942902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.075 #19 NEW cov: 12083 ft: 14514 corp: 18/50b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:09:06.075 [2024-07-15 12:26:00.982781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.075 [2024-07-15 12:26:00.982809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.075 #20 NEW cov: 12083 ft: 14515 corp: 19/51b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:09:06.075 [2024-07-15 12:26:01.023189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.075 [2024-07-15 12:26:01.023215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.075 [2024-07-15 12:26:01.023269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.075 [2024-07-15 12:26:01.023283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.075 [2024-07-15 12:26:01.023338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.075 [2024-07-15 12:26:01.023352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.333 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:06.333 #21 NEW cov: 12106 ft: 14687 corp: 20/54b lim: 5 exec/s: 21 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:09:06.333 [2024-07-15 12:26:01.364693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.333 [2024-07-15 12:26:01.364748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.333 [2024-07-15 12:26:01.364824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.333 [2024-07-15 12:26:01.364852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.333 [2024-07-15 12:26:01.364922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.333 [2024-07-15 12:26:01.364944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.333 [2024-07-15 12:26:01.365016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.333 [2024-07-15 12:26:01.365038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.333 [2024-07-15 12:26:01.365108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.333 [2024-07-15 12:26:01.365130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:06.333 #22 NEW cov: 12106 ft: 14777 corp: 21/59b lim: 5 exec/s: 22 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:06.333 [2024-07-15 12:26:01.424016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.333 [2024-07-15 12:26:01.424045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.333 #23 NEW cov: 12106 ft: 14858 corp: 22/60b lim: 5 exec/s: 23 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:06.592 [2024-07-15 12:26:01.474148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.474176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.592 #24 NEW cov: 12106 ft: 14891 corp: 23/61b lim: 5 exec/s: 24 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:06.592 [2024-07-15 12:26:01.524918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.524946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.525004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.525018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.525074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.525089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.525144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.525157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.525213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.525227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:06.592 #25 NEW cov: 12106 ft: 14898 corp: 24/66b lim: 5 exec/s: 25 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:09:06.592 [2024-07-15 12:26:01.574423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.574451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.592 #26 NEW cov: 12106 ft: 14919 corp: 25/67b lim: 5 exec/s: 26 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:09:06.592 [2024-07-15 12:26:01.624543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.624571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.592 #27 NEW cov: 12106 ft: 14967 corp: 26/68b lim: 5 exec/s: 27 rss: 74Mb L: 1/5 MS: 1 CrossOver- 00:09:06.592 [2024-07-15 12:26:01.675163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.675190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.675245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.675259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.675316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.675331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.592 [2024-07-15 12:26:01.675388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.592 [2024-07-15 12:26:01.675401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.592 #28 NEW cov: 12106 ft: 14975 corp: 27/72b lim: 5 exec/s: 28 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:09:06.858 [2024-07-15 12:26:01.725315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.725341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.725396] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.725410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.725465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.725479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.725537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.725551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.858 #29 NEW cov: 12106 ft: 15066 corp: 28/76b lim: 5 exec/s: 29 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:09:06.858 [2024-07-15 12:26:01.775147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.775181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.775242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.775258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.858 #30 NEW cov: 12106 ft: 15094 corp: 29/78b lim: 5 exec/s: 30 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:09:06.858 [2024-07-15 12:26:01.815537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.815563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.815620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.815634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.815690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.815704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.815759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.815772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:09:06.858 #31 NEW cov: 12106 ft: 15101 corp: 30/82b lim: 5 exec/s: 31 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:09:06.858 [2024-07-15 12:26:01.855848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.855875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.855934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.855949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.856006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.856021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.856078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.856093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.858 [2024-07-15 12:26:01.856146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.858 [2024-07-15 12:26:01.856161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:06.859 #32 NEW cov: 12106 ft: 15113 corp: 31/87b lim: 5 exec/s: 32 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:06.859 [2024-07-15 12:26:01.895647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.895676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.859 [2024-07-15 12:26:01.895733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.895747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.859 [2024-07-15 12:26:01.895803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.895835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.859 #33 NEW cov: 12106 ft: 15122 corp: 32/90b lim: 5 exec/s: 33 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:09:06.859 [2024-07-15 12:26:01.935430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.935455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.859 #34 NEW cov: 12106 ft: 15148 corp: 33/91b lim: 5 exec/s: 34 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:09:06.859 [2024-07-15 12:26:01.986209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.986236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.859 [2024-07-15 12:26:01.986296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.986311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.859 [2024-07-15 12:26:01.986369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.986385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.859 [2024-07-15 12:26:01.986442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.986455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.859 [2024-07-15 12:26:01.986512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.859 [2024-07-15 12:26:01.986532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:07.117 #35 NEW cov: 12106 ft: 15152 corp: 34/96b lim: 5 exec/s: 35 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:07.117 [2024-07-15 12:26:02.025835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.025861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.025920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.025935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.117 #36 NEW cov: 12106 ft: 15163 corp: 35/98b lim: 5 exec/s: 36 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:09:07.117 [2024-07-15 12:26:02.075975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.076002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.117 
[2024-07-15 12:26:02.076060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.076075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.117 #37 NEW cov: 12106 ft: 15173 corp: 36/100b lim: 5 exec/s: 37 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:09:07.117 [2024-07-15 12:26:02.116609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.116634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.116693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.116710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.116768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.116783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.116838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.116851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.116907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.116921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.166720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.166746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.166803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.166816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.166872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.166887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.166943] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.166956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.117 [2024-07-15 12:26:02.167013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.117 [2024-07-15 12:26:02.167044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:07.117 #39 NEW cov: 12106 ft: 15180 corp: 37/105b lim: 5 exec/s: 19 rss: 75Mb L: 5/5 MS: 2 ChangeBit-CopyPart- 00:09:07.117 #39 DONE cov: 12106 ft: 15180 corp: 37/105b lim: 5 exec/s: 19 rss: 75Mb 00:09:07.117 Done 39 runs in 2 second(s) 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:07.375 12:26:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:09:07.375 [2024-07-15 12:26:02.355516] Starting SPDK 
v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:07.375 [2024-07-15 12:26:02.355578] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161301 ] 00:09:07.375 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.633 [2024-07-15 12:26:02.535717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.633 [2024-07-15 12:26:02.608518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.633 [2024-07-15 12:26:02.667966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.633 [2024-07-15 12:26:02.684183] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:09:07.633 INFO: Running with entropic power schedule (0xFF, 100). 00:09:07.633 INFO: Seed: 25203359 00:09:07.633 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:07.633 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:07.633 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:07.633 INFO: A corpus is not provided, starting from an empty corpus 00:09:07.633 [2024-07-15 12:26:02.749575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.633 [2024-07-15 12:26:02.749607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.890 #2 INITED cov: 11862 ft: 11863 corp: 1/1b exec/s: 0 rss: 70Mb 00:09:07.890 [2024-07-15 12:26:02.789616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.789646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.890 #3 NEW cov: 11992 ft: 12552 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ShuffleBytes- 00:09:07.890 [2024-07-15 12:26:02.839909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.839937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.890 [2024-07-15 12:26:02.839994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.840010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.890 #4 NEW cov: 11998 ft: 13375 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:09:07.890 [2024-07-15 12:26:02.879863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.879890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.890 #5 NEW 
cov: 12083 ft: 13610 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 EraseBytes- 00:09:07.890 [2024-07-15 12:26:02.930187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.930214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.890 [2024-07-15 12:26:02.930271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.930285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.890 #6 NEW cov: 12083 ft: 13722 corp: 5/7b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:09:07.890 [2024-07-15 12:26:02.970135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.890 [2024-07-15 12:26:02.970161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.890 #7 NEW cov: 12083 ft: 13765 corp: 6/8b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeByte- 00:09:08.147 [2024-07-15 12:26:03.020430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.020458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.147 [2024-07-15 12:26:03.020518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.020543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.147 #8 NEW cov: 12083 ft: 13825 corp: 7/10b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ShuffleBytes- 00:09:08.147 [2024-07-15 12:26:03.070714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.070739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.147 [2024-07-15 12:26:03.070794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.070809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.147 [2024-07-15 12:26:03.070866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.070881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.147 #9 NEW cov: 12083 ft: 14063 corp: 8/13b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CrossOver- 00:09:08.147 [2024-07-15 
12:26:03.110521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.110552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.147 #10 NEW cov: 12083 ft: 14169 corp: 9/14b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 CrossOver- 00:09:08.147 [2024-07-15 12:26:03.150621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.150648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.147 #11 NEW cov: 12083 ft: 14228 corp: 10/15b lim: 5 exec/s: 0 rss: 71Mb L: 1/3 MS: 1 ChangeBit- 00:09:08.147 [2024-07-15 12:26:03.200935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.200962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.147 [2024-07-15 12:26:03.201019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.201034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.147 #12 NEW cov: 12083 ft: 14244 corp: 11/17b lim: 5 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 ChangeBinInt- 00:09:08.147 [2024-07-15 12:26:03.240974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.241000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.147 [2024-07-15 12:26:03.241054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.147 [2024-07-15 12:26:03.241069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.147 #13 NEW cov: 12083 ft: 14306 corp: 12/19b lim: 5 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 CopyPart- 00:09:08.403 [2024-07-15 12:26:03.281134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.281159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.403 [2024-07-15 12:26:03.281217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.281232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.403 #14 NEW cov: 12083 ft: 14351 corp: 13/21b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 ChangeBit- 00:09:08.403 [2024-07-15 
12:26:03.331115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.331142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.403 #15 NEW cov: 12083 ft: 14424 corp: 14/22b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 EraseBytes- 00:09:08.403 [2024-07-15 12:26:03.371248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.371274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.403 #16 NEW cov: 12083 ft: 14483 corp: 15/23b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ChangeBit- 00:09:08.403 [2024-07-15 12:26:03.421324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.421350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.403 #17 NEW cov: 12083 ft: 14486 corp: 16/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 ShuffleBytes- 00:09:08.403 [2024-07-15 12:26:03.471958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.471984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.403 [2024-07-15 12:26:03.472038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.403 [2024-07-15 12:26:03.472053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.403 [2024-07-15 12:26:03.472106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.404 [2024-07-15 12:26:03.472121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.404 [2024-07-15 12:26:03.472173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.404 [2024-07-15 12:26:03.472186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.404 #18 NEW cov: 12083 ft: 14770 corp: 17/28b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:08.404 [2024-07-15 12:26:03.522067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.404 [2024-07-15 12:26:03.522093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.404 [2024-07-15 12:26:03.522148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.404 [2024-07-15 12:26:03.522162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.404 [2024-07-15 12:26:03.522216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.404 [2024-07-15 12:26:03.522233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.404 [2024-07-15 12:26:03.522287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.404 [2024-07-15 12:26:03.522300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.700 #19 NEW cov: 12083 ft: 14854 corp: 18/32b lim: 5 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 InsertByte- 00:09:08.700 [2024-07-15 12:26:03.571909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.571936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.700 [2024-07-15 12:26:03.571994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.572009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.700 #20 NEW cov: 12083 ft: 14879 corp: 19/34b lim: 5 exec/s: 0 rss: 72Mb L: 2/4 MS: 1 InsertByte- 00:09:08.700 [2024-07-15 12:26:03.622511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.622543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.700 [2024-07-15 12:26:03.622601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.622615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.700 [2024-07-15 12:26:03.622670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.622700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.700 [2024-07-15 12:26:03.622754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.622769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.700 [2024-07-15 12:26:03.622825] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.700 [2024-07-15 12:26:03.622841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:08.957 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:08.957 #21 NEW cov: 12106 ft: 14973 corp: 20/39b lim: 5 exec/s: 21 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:09:08.957 [2024-07-15 12:26:03.963061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.957 [2024-07-15 12:26:03.963126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.957 #22 NEW cov: 12106 ft: 15125 corp: 21/40b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:09:08.957 [2024-07-15 12:26:04.012891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.957 [2024-07-15 12:26:04.012923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.957 #23 NEW cov: 12106 ft: 15174 corp: 22/41b lim: 5 exec/s: 23 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:08.957 [2024-07-15 12:26:04.053481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.957 [2024-07-15 12:26:04.053510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.957 [2024-07-15 12:26:04.053572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.957 [2024-07-15 12:26:04.053589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.957 [2024-07-15 12:26:04.053643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.957 [2024-07-15 12:26:04.053659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.957 [2024-07-15 12:26:04.053715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.957 [2024-07-15 12:26:04.053731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.215 #24 NEW cov: 12106 ft: 15201 corp: 23/45b lim: 5 exec/s: 24 rss: 73Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:09:09.215 [2024-07-15 12:26:04.103627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.103655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:09.215 [2024-07-15 12:26:04.103713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.103728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.103783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.103798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.103854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.103867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.215 #25 NEW cov: 12106 ft: 15203 corp: 24/49b lim: 5 exec/s: 25 rss: 73Mb L: 4/5 MS: 1 ChangeBinInt- 00:09:09.215 [2024-07-15 12:26:04.153280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.153307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.215 #26 NEW cov: 12106 ft: 15233 corp: 25/50b lim: 5 exec/s: 26 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:09:09.215 [2024-07-15 12:26:04.193370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.193397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.215 #27 NEW cov: 12106 ft: 15265 corp: 26/51b lim: 5 exec/s: 27 rss: 73Mb L: 1/5 MS: 1 EraseBytes- 00:09:09.215 [2024-07-15 12:26:04.243739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.243768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.243824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.243840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.215 #28 NEW cov: 12106 ft: 15272 corp: 27/53b lim: 5 exec/s: 28 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:09:09.215 [2024-07-15 12:26:04.284139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.284167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.284221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.284236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.284293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.284309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.284363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.284377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.215 #29 NEW cov: 12106 ft: 15285 corp: 28/57b lim: 5 exec/s: 29 rss: 73Mb L: 4/5 MS: 1 ShuffleBytes- 00:09:09.215 [2024-07-15 12:26:04.333970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.333997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.215 [2024-07-15 12:26:04.334053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.215 [2024-07-15 12:26:04.334066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.473 #30 NEW cov: 12106 ft: 15308 corp: 29/59b lim: 5 exec/s: 30 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:09:09.473 [2024-07-15 12:26:04.384425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.384453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.384508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.384522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.384582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.384600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.384653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.384667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.473 #31 NEW cov: 
12106 ft: 15323 corp: 30/63b lim: 5 exec/s: 31 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:09:09.473 [2024-07-15 12:26:04.424214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.424240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.424298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.424313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.473 #32 NEW cov: 12106 ft: 15338 corp: 31/65b lim: 5 exec/s: 32 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:09:09.473 [2024-07-15 12:26:04.474335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.474362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.474418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.474433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.473 #33 NEW cov: 12106 ft: 15370 corp: 32/67b lim: 5 exec/s: 33 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:09:09.473 [2024-07-15 12:26:04.524796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.524822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.524878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.524893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.524946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.524978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.525035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.525049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.473 #34 NEW cov: 12106 ft: 15381 corp: 33/71b lim: 5 exec/s: 34 rss: 74Mb L: 4/5 MS: 1 CMP- DE: "\001\000\000\037"- 00:09:09.473 [2024-07-15 12:26:04.564749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.564775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.564836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.564851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.473 [2024-07-15 12:26:04.564906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.473 [2024-07-15 12:26:04.564920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.473 #35 NEW cov: 12106 ft: 15388 corp: 34/74b lim: 5 exec/s: 35 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:09:09.730 [2024-07-15 12:26:04.605034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.605061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 12:26:04.605120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.605135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 12:26:04.605203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.605218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 12:26:04.605272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.605286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.730 #36 NEW cov: 12106 ft: 15398 corp: 35/78b lim: 5 exec/s: 36 rss: 74Mb L: 4/5 MS: 1 ChangeBit- 00:09:09.730 [2024-07-15 12:26:04.655343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.655369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 12:26:04.655427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.655442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 
12:26:04.655495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.655511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 12:26:04.655567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.655581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.730 [2024-07-15 12:26:04.655637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.655652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:09.730 #37 NEW cov: 12106 ft: 15404 corp: 36/83b lim: 5 exec/s: 37 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:09:09.730 [2024-07-15 12:26:04.705019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.730 [2024-07-15 12:26:04.705045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.731 [2024-07-15 12:26:04.705102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.731 [2024-07-15 12:26:04.705116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.731 #38 NEW cov: 12106 ft: 15449 corp: 37/85b lim: 5 exec/s: 38 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:09:09.731 [2024-07-15 12:26:04.744970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.731 [2024-07-15 12:26:04.744996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.731 #39 NEW cov: 12106 ft: 15454 corp: 38/86b lim: 5 exec/s: 19 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:09:09.731 #39 DONE cov: 12106 ft: 15454 corp: 38/86b lim: 5 exec/s: 19 rss: 74Mb 00:09:09.731 ###### Recommended dictionary. ###### 00:09:09.731 "\001\000\000\037" # Uses: 0 00:09:09.731 ###### End of recommended dictionary. 
###### 00:09:09.731 Done 39 runs in 2 second(s) 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:09.988 12:26:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:09:09.988 [2024-07-15 12:26:04.927773] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:09.988 [2024-07-15 12:26:04.927828] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161658 ] 00:09:09.988 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.988 [2024-07-15 12:26:05.104159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.245 [2024-07-15 12:26:05.177444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.245 [2024-07-15 12:26:05.236891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.245 [2024-07-15 12:26:05.253103] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:09:10.245 INFO: Running with entropic power schedule (0xFF, 100). 00:09:10.245 INFO: Seed: 2593202397 00:09:10.245 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:10.245 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:10.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:10.245 INFO: A corpus is not provided, starting from an empty corpus 00:09:10.245 #2 INITED exec/s: 0 rss: 64Mb 00:09:10.245 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:10.245 This may also happen if the target rejected all inputs we tried so far 00:09:10.245 [2024-07-15 12:26:05.298549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.245 [2024-07-15 12:26:05.298580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.245 [2024-07-15 12:26:05.298639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.245 [2024-07-15 12:26:05.298654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.503 NEW_FUNC[1/695]: 0x490cf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:09:10.503 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:10.503 #4 NEW cov: 11885 ft: 11877 corp: 2/24b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:10.761 [2024-07-15 12:26:05.639415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.639459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.639518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.639537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.761 #10 NEW cov: 12015 ft: 12434 corp: 3/47b lim: 40 
exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBit- 00:09:10.761 [2024-07-15 12:26:05.689506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.689543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.689604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.689632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.761 #11 NEW cov: 12021 ft: 12652 corp: 4/70b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:10.761 [2024-07-15 12:26:05.729598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.729632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.729693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.729709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.761 #12 NEW cov: 12106 ft: 12821 corp: 5/93b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBinInt- 00:09:10.761 [2024-07-15 12:26:05.769846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.769872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.769931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.769944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.770002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.770017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.761 #18 NEW cov: 12106 ft: 13210 corp: 6/120b lim: 40 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:09:10.761 [2024-07-15 12:26:05.819853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.819878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.819936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.819951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.761 #19 NEW cov: 12106 ft: 13310 corp: 7/143b lim: 40 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 ShuffleBytes- 00:09:10.761 [2024-07-15 12:26:05.859923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.859950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.761 [2024-07-15 12:26:05.860011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.761 [2024-07-15 12:26:05.860025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.761 #20 NEW cov: 12106 ft: 13383 corp: 8/166b lim: 40 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 ChangeByte- 00:09:11.019 [2024-07-15 12:26:05.900050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:05.900077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:05.900136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bbffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:05.900151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.019 #21 NEW cov: 12106 ft: 13487 corp: 9/189b lim: 40 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 ChangeByte- 00:09:11.019 [2024-07-15 12:26:05.950170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:05.950196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:05.950259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:05.950274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.019 #22 NEW cov: 12106 ft: 13510 corp: 10/212b lim: 40 exec/s: 0 rss: 73Mb L: 23/27 MS: 1 CopyPart- 00:09:11.019 [2024-07-15 12:26:06.000337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.000363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:06.000424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff2dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 
[2024-07-15 12:26:06.000439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.019 #23 NEW cov: 12106 ft: 13577 corp: 11/235b lim: 40 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ChangeByte- 00:09:11.019 [2024-07-15 12:26:06.050640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.050666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:06.050725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.050740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:06.050796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffeecff cdw11:ffffff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.050810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.019 #24 NEW cov: 12106 ft: 13597 corp: 12/259b lim: 40 exec/s: 0 rss: 74Mb L: 24/27 MS: 1 InsertByte- 00:09:11.019 [2024-07-15 12:26:06.090643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.090671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:06.090731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff05ff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.090746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.019 #25 NEW cov: 12106 ft: 13618 corp: 13/282b lim: 40 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ChangeBinInt- 00:09:11.019 [2024-07-15 12:26:06.140751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.140776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.019 [2024-07-15 12:26:06.140836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffffe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.019 [2024-07-15 12:26:06.140854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.276 #26 NEW cov: 12106 ft: 13635 corp: 14/305b lim: 40 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ShuffleBytes- 00:09:11.276 [2024-07-15 12:26:06.181102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.181127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.276 [2024-07-15 12:26:06.181186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.181200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.276 [2024-07-15 12:26:06.181275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.181290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.276 [2024-07-15 12:26:06.181347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.181360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.276 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:11.276 #27 NEW cov: 12129 ft: 14127 corp: 15/341b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:09:11.276 [2024-07-15 12:26:06.221095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.221120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.276 [2024-07-15 12:26:06.221178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.221192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.276 [2024-07-15 12:26:06.221250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffeecff cdw11:ffffff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.276 [2024-07-15 12:26:06.221264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.276 #28 NEW cov: 12129 ft: 14151 corp: 16/365b lim: 40 exec/s: 0 rss: 74Mb L: 24/36 MS: 1 ChangeByte- 00:09:11.277 [2024-07-15 12:26:06.271240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff010200 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.271265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.277 [2024-07-15 12:26:06.271322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.271336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.277 [2024-07-15 12:26:06.271391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:ffffffff cdw11:ffff01ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.271408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.277 #29 NEW cov: 12129 ft: 14213 corp: 17/392b lim: 40 exec/s: 29 rss: 74Mb L: 27/36 MS: 1 CMP- DE: "\001\002\000\000"- 00:09:11.277 [2024-07-15 12:26:06.311257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.311283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.277 [2024-07-15 12:26:06.311344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.311358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.277 #30 NEW cov: 12129 ft: 14227 corp: 18/415b lim: 40 exec/s: 30 rss: 74Mb L: 23/36 MS: 1 EraseBytes- 00:09:11.277 [2024-07-15 12:26:06.361408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.361433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.277 [2024-07-15 12:26:06.361492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffeffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.277 [2024-07-15 12:26:06.361507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.277 #31 NEW cov: 12129 ft: 14237 corp: 19/436b lim: 40 exec/s: 31 rss: 74Mb L: 21/36 MS: 1 EraseBytes- 00:09:11.535 [2024-07-15 12:26:06.411651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.411677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.411738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000048ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.411752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.411810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:bbffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.411824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.535 #32 NEW cov: 12129 ft: 14243 corp: 20/467b lim: 40 exec/s: 32 rss: 74Mb L: 31/36 MS: 1 CMP- DE: "\000\000\000\000\000\000\000H"- 00:09:11.535 [2024-07-15 12:26:06.461770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.461796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.461855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff9dffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.461869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.461925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffffeff cdw11:ffffff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.461940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.535 #33 NEW cov: 12129 ft: 14262 corp: 21/491b lim: 40 exec/s: 33 rss: 74Mb L: 24/36 MS: 1 InsertByte- 00:09:11.535 [2024-07-15 12:26:06.501896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.501922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.501981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.501996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.502053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff60 cdw11:ff5dff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.502068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.535 #34 NEW cov: 12129 ft: 14339 corp: 22/515b lim: 40 exec/s: 34 rss: 74Mb L: 24/36 MS: 1 InsertByte- 00:09:11.535 [2024-07-15 12:26:06.552016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.552041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.552101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff002789 cdw11:ed68d4ab SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.552116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.552173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ecffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.552188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.535 #35 NEW cov: 12129 ft: 14350 corp: 23/542b lim: 40 exec/s: 35 rss: 74Mb L: 27/36 MS: 1 CMP- DE: 
"\000'\211\355h\324\253\354"- 00:09:11.535 [2024-07-15 12:26:06.592160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.592186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.592244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.592259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.592317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:60ff5dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.592332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.535 #36 NEW cov: 12129 ft: 14354 corp: 24/567b lim: 40 exec/s: 36 rss: 74Mb L: 25/36 MS: 1 InsertByte- 00:09:11.535 [2024-07-15 12:26:06.642222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.642248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.642307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff17 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.642325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.535 [2024-07-15 12:26:06.642383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:60ff5dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.535 [2024-07-15 12:26:06.642397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.535 #37 NEW cov: 12129 ft: 14402 corp: 25/592b lim: 40 exec/s: 37 rss: 74Mb L: 25/36 MS: 1 InsertByte- 00:09:11.792 [2024-07-15 12:26:06.682260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff24ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.682285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.792 [2024-07-15 12:26:06.682347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.682362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.792 #38 NEW cov: 12129 ft: 14429 corp: 26/615b lim: 40 exec/s: 38 rss: 74Mb L: 23/36 MS: 1 ChangeByte- 00:09:11.792 [2024-07-15 12:26:06.722737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.722761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.792 [2024-07-15 12:26:06.722819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.722833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.792 [2024-07-15 12:26:06.722891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff01ff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.722906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.792 [2024-07-15 12:26:06.722965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.722979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.792 [2024-07-15 12:26:06.723056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000ff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.723071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:11.792 #39 NEW cov: 12129 ft: 14478 corp: 27/655b lim: 40 exec/s: 39 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:11.792 [2024-07-15 12:26:06.762631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ff010200 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.762657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.792 [2024-07-15 12:26:06.762715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.792 [2024-07-15 12:26:06.762729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.762787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff09ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.762805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.793 #40 NEW cov: 12129 ft: 14486 corp: 28/682b lim: 40 exec/s: 40 rss: 74Mb L: 27/40 MS: 1 ChangeBit- 00:09:11.793 [2024-07-15 12:26:06.812928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff4040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.812954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.813014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40404040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.813029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.813089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:40ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.813103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.813162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.813175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.793 #41 NEW cov: 12129 ft: 14492 corp: 29/716b lim: 40 exec/s: 41 rss: 75Mb L: 34/40 MS: 1 InsertRepeatedBytes- 00:09:11.793 [2024-07-15 12:26:06.852889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.852914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.852974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff17 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.852989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.853045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:60ff5dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.853059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.793 #42 NEW cov: 12129 ft: 14498 corp: 30/741b lim: 40 exec/s: 42 rss: 75Mb L: 25/40 MS: 1 ChangeBit- 00:09:11.793 [2024-07-15 12:26:06.902923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.902950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.793 [2024-07-15 12:26:06.903010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff05ff cdw11:ffffff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.793 [2024-07-15 12:26:06.903024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 #43 NEW cov: 12129 ft: 14506 corp: 31/764b lim: 40 exec/s: 43 rss: 75Mb L: 23/40 MS: 1 CrossOver- 00:09:12.051 [2024-07-15 12:26:06.953143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:06.953170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:06.953229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:06.953244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:06.953300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff78 cdw11:787878ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:06.953314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.051 #44 NEW cov: 12129 ft: 14528 corp: 32/791b lim: 40 exec/s: 44 rss: 75Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:09:12.051 [2024-07-15 12:26:06.993133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:06.993161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:06.993241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:bbffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:06.993256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 #45 NEW cov: 12129 ft: 14565 corp: 33/814b lim: 40 exec/s: 45 rss: 75Mb L: 23/40 MS: 1 ChangeBit- 00:09:12.051 [2024-07-15 12:26:07.033477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.033503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.033585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.033600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.033659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffffff7 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.033673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.033732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.033747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.051 #46 NEW cov: 12129 ft: 14583 corp: 34/850b lim: 40 exec/s: 46 rss: 75Mb L: 36/40 MS: 1 ChangeBinInt- 00:09:12.051 [2024-07-15 12:26:07.083534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.083558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.083631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fff7ffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.083644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.083703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff60 cdw11:ff5dff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.083720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.051 #47 NEW cov: 12129 ft: 14613 corp: 35/874b lim: 40 exec/s: 47 rss: 75Mb L: 24/40 MS: 1 CrossOver- 00:09:12.051 [2024-07-15 12:26:07.123517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.123546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.123607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffeff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.123621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 #48 NEW cov: 12129 ft: 14640 corp: 36/895b lim: 40 exec/s: 48 rss: 75Mb L: 21/40 MS: 1 ShuffleBytes- 00:09:12.051 [2024-07-15 12:26:07.173787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.173811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.173889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:fbffffff cdw11:ffffff0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.173903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.051 [2024-07-15 12:26:07.173964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffeecff cdw11:ffffff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.051 [2024-07-15 12:26:07.173978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.309 #49 NEW cov: 12129 ft: 14660 corp: 37/919b lim: 40 exec/s: 49 rss: 75Mb L: 24/40 MS: 1 ChangeBit- 00:09:12.309 [2024-07-15 12:26:07.223806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.309 [2024-07-15 12:26:07.223831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.309 [2024-07-15 12:26:07.223908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffff2c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.309 [2024-07-15 12:26:07.223923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.309 #50 NEW cov: 12129 ft: 14663 corp: 38/942b lim: 40 exec/s: 50 rss: 75Mb L: 23/40 MS: 1 ChangeByte- 00:09:12.309 [2024-07-15 12:26:07.264191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffb2b2 cdw11:b2b2b2b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.309 [2024-07-15 12:26:07.264216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.309 [2024-07-15 12:26:07.264292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:b2b2b2ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.309 [2024-07-15 12:26:07.264306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.309 [2024-07-15 12:26:07.264366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.309 [2024-07-15 12:26:07.264379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.309 [2024-07-15 12:26:07.264436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00ffffff cdw11:60ffff08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.309 [2024-07-15 12:26:07.264452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.309 #51 NEW cov: 12129 ft: 14676 corp: 39/974b lim: 40 exec/s: 25 rss: 75Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:09:12.309 #51 DONE cov: 12129 ft: 14676 corp: 39/974b lim: 40 exec/s: 25 rss: 75Mb 00:09:12.309 ###### Recommended dictionary. ###### 00:09:12.309 "\000\000\000\000" # Uses: 1 00:09:12.309 "\001\002\000\000" # Uses: 0 00:09:12.309 "\000\000\000\000\000\000\000H" # Uses: 0 00:09:12.309 "\000'\211\355h\324\253\354" # Uses: 0 00:09:12.309 ###### End of recommended dictionary. 
###### 00:09:12.309 Done 51 runs in 2 second(s) 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:09:12.309 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:12.567 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:12.567 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:12.567 12:26:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:09:12.567 [2024-07-15 12:26:07.469694] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:12.567 [2024-07-15 12:26:07.469782] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162011 ] 00:09:12.567 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.567 [2024-07-15 12:26:07.657077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.825 [2024-07-15 12:26:07.730281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.825 [2024-07-15 12:26:07.789877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.825 [2024-07-15 12:26:07.806083] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:09:12.825 INFO: Running with entropic power schedule (0xFF, 100). 00:09:12.825 INFO: Seed: 851233885 00:09:12.825 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:12.825 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:12.825 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:12.825 INFO: A corpus is not provided, starting from an empty corpus 00:09:12.825 #2 INITED exec/s: 0 rss: 65Mb 00:09:12.825 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:12.825 This may also happen if the target rejected all inputs we tried so far 00:09:12.825 [2024-07-15 12:26:07.851085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.825 [2024-07-15 12:26:07.851122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.825 [2024-07-15 12:26:07.851159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.825 [2024-07-15 12:26:07.851175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.825 [2024-07-15 12:26:07.851206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.825 [2024-07-15 12:26:07.851222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.825 [2024-07-15 12:26:07.851253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.825 [2024-07-15 12:26:07.851268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.083 NEW_FUNC[1/696]: 0x492a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:09:13.083 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:13.083 #14 NEW cov: 11897 ft: 11894 corp: 2/37b lim: 40 exec/s: 0 rss: 71Mb L: 36/36 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:13.083 [2024-07-15 12:26:08.201934] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.083 [2024-07-15 12:26:08.201985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.083 [2024-07-15 12:26:08.202022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.083 [2024-07-15 12:26:08.202039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.083 [2024-07-15 12:26:08.202071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.083 [2024-07-15 12:26:08.202087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.083 [2024-07-15 12:26:08.202117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.083 [2024-07-15 12:26:08.202132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.340 #15 NEW cov: 12027 ft: 12480 corp: 3/75b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:09:13.340 [2024-07-15 12:26:08.281980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.340 [2024-07-15 12:26:08.282017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.340 [2024-07-15 12:26:08.282067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.340 [2024-07-15 12:26:08.282088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.282119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2789ee55 cdw11:c2eaac00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.282134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.282165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.282180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.341 #16 NEW cov: 12033 ft: 12807 corp: 4/113b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CMP- DE: "\001'\211\356U\302\352\254"- 00:09:13.341 [2024-07-15 12:26:08.362172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.362207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 
12:26:08.362258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.362274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.362305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.362320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.362351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00b30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.362367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.341 #17 NEW cov: 12118 ft: 13114 corp: 5/151b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 ChangeByte- 00:09:13.341 [2024-07-15 12:26:08.422330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.422361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.422410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.422427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.422458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.422473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.341 [2024-07-15 12:26:08.422504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00b30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.341 [2024-07-15 12:26:08.422520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.599 #18 NEW cov: 12118 ft: 13246 corp: 6/189b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 ChangeBit- 00:09:13.599 [2024-07-15 12:26:08.502616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.599 [2024-07-15 12:26:08.502648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.599 [2024-07-15 12:26:08.502684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.599 [2024-07-15 12:26:08.502701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.599 
[2024-07-15 12:26:08.502733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.599 [2024-07-15 12:26:08.502749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.599 [2024-07-15 12:26:08.502780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.599 [2024-07-15 12:26:08.502795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.599 #19 NEW cov: 12118 ft: 13295 corp: 7/225b lim: 40 exec/s: 0 rss: 72Mb L: 36/38 MS: 1 ChangeBit- 00:09:13.599 [2024-07-15 12:26:08.562713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.599 [2024-07-15 12:26:08.562745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.599 [2024-07-15 12:26:08.562796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.599 [2024-07-15 12:26:08.562812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.599 [2024-07-15 12:26:08.562843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.562860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.600 [2024-07-15 12:26:08.562891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.562907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.600 #20 NEW cov: 12118 ft: 13378 corp: 8/261b lim: 40 exec/s: 0 rss: 72Mb L: 36/38 MS: 1 ChangeBinInt- 00:09:13.600 [2024-07-15 12:26:08.642978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.643011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.600 [2024-07-15 12:26:08.643062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.643078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.600 [2024-07-15 12:26:08.643110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:0000b700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.643126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:09:13.600 [2024-07-15 12:26:08.643156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.643176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.600 #21 NEW cov: 12118 ft: 13402 corp: 9/297b lim: 40 exec/s: 0 rss: 72Mb L: 36/38 MS: 1 ChangeByte- 00:09:13.600 [2024-07-15 12:26:08.723208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:012789ee SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.723241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.600 [2024-07-15 12:26:08.723277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:55c2eaac cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.723293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.600 [2024-07-15 12:26:08.723325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.723341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.600 [2024-07-15 12:26:08.723372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00b30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.600 [2024-07-15 12:26:08.723388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.857 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:13.857 #22 NEW cov: 12141 ft: 13452 corp: 10/335b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 PersAutoDict- DE: "\001'\211\356U\302\352\254"- 00:09:13.857 [2024-07-15 12:26:08.803501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-07-15 12:26:08.803544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.857 [2024-07-15 12:26:08.803581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-07-15 12:26:08.803598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.857 [2024-07-15 12:26:08.803630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:2789ee55 cdw11:c2eaac00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.857 [2024-07-15 12:26:08.803646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.803678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:13.858 [2024-07-15 12:26:08.803694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.858 #23 NEW cov: 12141 ft: 13531 corp: 11/373b lim: 40 exec/s: 23 rss: 72Mb L: 38/38 MS: 1 ChangeBit- 00:09:13.858 [2024-07-15 12:26:08.883695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.883728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.883778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.883794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.883830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.883845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.883876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.883891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.858 #24 NEW cov: 12141 ft: 13546 corp: 12/409b lim: 40 exec/s: 24 rss: 72Mb L: 36/38 MS: 1 CrossOver- 00:09:13.858 [2024-07-15 12:26:08.933764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.933797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.933847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.933863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.933894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.933909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.858 [2024-07-15 12:26:08.933940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.858 [2024-07-15 12:26:08.933955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.116 #25 NEW cov: 12141 ft: 13583 corp: 13/445b lim: 40 exec/s: 25 rss: 73Mb L: 36/38 MS: 1 ChangeBinInt- 00:09:14.116 [2024-07-15 12:26:09.013949] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.013979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.014029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000015 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.014045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.014076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:5d06baee cdw11:89270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.014091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.014122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.014137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.116 #26 NEW cov: 12141 ft: 13626 corp: 14/481b lim: 40 exec/s: 26 rss: 73Mb L: 36/38 MS: 1 CMP- DE: "\025]\006\272\356\211'\000"- 00:09:14.116 [2024-07-15 12:26:09.064084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.064113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.064152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000015 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.064168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.064198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:5d06baee cdw11:89270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.064214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.064244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.064259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.116 #27 NEW cov: 12141 ft: 13751 corp: 15/520b lim: 40 exec/s: 27 rss: 73Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:09:14.116 [2024-07-15 12:26:09.144303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.144334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:14.116 [2024-07-15 12:26:09.144369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.144385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.144416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000023 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.144431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.116 [2024-07-15 12:26:09.144461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000b300 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.144476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.116 #28 NEW cov: 12141 ft: 13788 corp: 16/559b lim: 40 exec/s: 28 rss: 73Mb L: 39/39 MS: 1 InsertByte- 00:09:14.116 [2024-07-15 12:26:09.194259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:5d5d5d5d cdw11:5d5d5d5d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.116 [2024-07-15 12:26:09.194289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.116 #29 NEW cov: 12141 ft: 14618 corp: 17/568b lim: 40 exec/s: 29 rss: 73Mb L: 9/39 MS: 1 InsertRepeatedBytes- 00:09:14.374 [2024-07-15 12:26:09.264628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.374 [2024-07-15 12:26:09.264659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.374 [2024-07-15 12:26:09.264693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.374 [2024-07-15 12:26:09.264709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.374 [2024-07-15 12:26:09.264740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00004000 cdw11:0000b700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.374 [2024-07-15 12:26:09.264755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.374 [2024-07-15 12:26:09.264789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.374 [2024-07-15 12:26:09.264805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.374 #30 NEW cov: 12141 ft: 14621 corp: 18/604b lim: 40 exec/s: 30 rss: 73Mb L: 36/39 MS: 1 ChangeBinInt- 00:09:14.375 [2024-07-15 12:26:09.344854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d0fe3131 cdw11:31313131 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 
12:26:09.344884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.344919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:31313131 cdw11:31313131 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.344935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.344966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:31313131 cdw11:31313131 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.344981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.345011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:31313131 cdw11:31313131 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.345027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.375 #34 NEW cov: 12141 ft: 14641 corp: 19/638b lim: 40 exec/s: 34 rss: 73Mb L: 34/39 MS: 4 CrossOver-InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:09:14.375 [2024-07-15 12:26:09.395009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.395038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.395087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.395104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.395134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.395149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.395179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000b3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.395195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.395226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000001a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.395241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:14.375 #35 NEW cov: 12141 ft: 14726 corp: 20/678b lim: 40 exec/s: 35 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:09:14.375 [2024-07-15 12:26:09.444946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.444979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.375 [2024-07-15 12:26:09.445014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.375 [2024-07-15 12:26:09.445031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.375 #36 NEW cov: 12141 ft: 14969 corp: 21/696b lim: 40 exec/s: 36 rss: 73Mb L: 18/40 MS: 1 EraseBytes- 00:09:14.633 [2024-07-15 12:26:09.505264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.505296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.505332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000015 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.505349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.505380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:5d06baee cdw11:89270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.505396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.505427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.505442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.633 #37 NEW cov: 12141 ft: 15036 corp: 22/735b lim: 40 exec/s: 37 rss: 73Mb L: 39/40 MS: 1 ChangeBinInt- 00:09:14.633 [2024-07-15 12:26:09.585441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.585473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.585522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.585545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.585576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.585592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.585622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00b30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.585638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.633 #38 NEW cov: 12141 ft: 15084 corp: 23/773b lim: 40 exec/s: 38 rss: 73Mb L: 38/40 MS: 1 CopyPart- 00:09:14.633 [2024-07-15 12:26:09.645513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff3f0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.645555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.645591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.645611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.633 #41 NEW cov: 12141 ft: 15085 corp: 24/792b lim: 40 exec/s: 41 rss: 73Mb L: 19/40 MS: 3 ChangeByte-InsertByte-CrossOver- 00:09:14.633 [2024-07-15 12:26:09.705673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.705704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.705753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.705769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.633 [2024-07-15 12:26:09.705800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.633 [2024-07-15 12:26:09.705815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.893 #42 NEW cov: 12141 ft: 15327 corp: 25/821b lim: 40 exec/s: 42 rss: 73Mb L: 29/40 MS: 1 InsertRepeatedBytes- 00:09:14.893 [2024-07-15 12:26:09.785976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.786007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.893 [2024-07-15 12:26:09.786056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.786072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.893 [2024-07-15 12:26:09.786103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:002e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.786118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:09:14.893 [2024-07-15 12:26:09.786149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.786164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.893 #43 NEW cov: 12141 ft: 15338 corp: 26/860b lim: 40 exec/s: 43 rss: 73Mb L: 39/40 MS: 1 InsertByte- 00:09:14.893 [2024-07-15 12:26:09.836047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ff3f0000 cdw11:005f5f5f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.836079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.893 [2024-07-15 12:26:09.836114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.836130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.893 [2024-07-15 12:26:09.836161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.893 [2024-07-15 12:26:09.836176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.893 #44 NEW cov: 12141 ft: 15385 corp: 27/888b lim: 40 exec/s: 22 rss: 73Mb L: 28/40 MS: 1 InsertRepeatedBytes- 00:09:14.893 #44 DONE cov: 12141 ft: 15385 corp: 27/888b lim: 40 exec/s: 22 rss: 73Mb 00:09:14.893 ###### Recommended dictionary. ###### 00:09:14.893 "\001'\211\356U\302\352\254" # Uses: 1 00:09:14.893 "\025]\006\272\356\211'\000" # Uses: 0 00:09:14.893 ###### End of recommended dictionary. 
###### 00:09:14.893 Done 44 runs in 2 second(s) 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:15.152 12:26:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:09:15.152 [2024-07-15 12:26:10.078463] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:15.152 [2024-07-15 12:26:10.078543] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162373 ] 00:09:15.152 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.152 [2024-07-15 12:26:10.262746] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.410 [2024-07-15 12:26:10.336710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.410 [2024-07-15 12:26:10.396252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.410 [2024-07-15 12:26:10.412462] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:09:15.410 INFO: Running with entropic power schedule (0xFF, 100). 00:09:15.410 INFO: Seed: 3457239967 00:09:15.410 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:15.410 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:15.410 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:15.410 INFO: A corpus is not provided, starting from an empty corpus 00:09:15.410 #2 INITED exec/s: 0 rss: 65Mb 00:09:15.410 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:15.410 This may also happen if the target rejected all inputs we tried so far 00:09:15.410 [2024-07-15 12:26:10.458098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.410 [2024-07-15 12:26:10.458131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.410 [2024-07-15 12:26:10.458201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.410 [2024-07-15 12:26:10.458215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.410 [2024-07-15 12:26:10.458270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.410 [2024-07-15 12:26:10.458283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.668 NEW_FUNC[1/696]: 0x4947d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:09:15.668 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:15.668 #3 NEW cov: 11893 ft: 11895 corp: 2/26b lim: 40 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:09:15.926 [2024-07-15 12:26:10.798855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.798896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.798953] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.798968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.926 #6 NEW cov: 12025 ft: 12831 corp: 3/42b lim: 40 exec/s: 0 rss: 72Mb L: 16/25 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:09:15.926 [2024-07-15 12:26:10.839042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.839068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.839144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44445944 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.839159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.839215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.839229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.926 #7 NEW cov: 12031 ft: 13056 corp: 4/67b lim: 40 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:09:15.926 [2024-07-15 12:26:10.889389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.889419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.889480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.889495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.889556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.889573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.889629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.889642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.926 #8 NEW cov: 12116 ft: 13558 corp: 5/104b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:09:15.926 [2024-07-15 12:26:10.939359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b26a10c cdw11:c1ef8927 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.939384] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.939442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.939456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.939510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.939524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.926 #9 NEW cov: 12116 ft: 13649 corp: 6/128b lim: 40 exec/s: 0 rss: 72Mb L: 24/37 MS: 1 CMP- DE: "&\241\014\301\357\211'\000"- 00:09:15.926 [2024-07-15 12:26:10.979295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b44443a cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.979322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:10.979383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:10.979397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.926 #10 NEW cov: 12116 ft: 13782 corp: 7/144b lim: 40 exec/s: 0 rss: 72Mb L: 16/37 MS: 1 ChangeByte- 00:09:15.926 [2024-07-15 12:26:11.019593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:26a10cc1 cdw11:ef892700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:11.019618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:11.019692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44445944 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:11.019707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.926 [2024-07-15 12:26:11.019761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.926 [2024-07-15 12:26:11.019775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.184 #16 NEW cov: 12116 ft: 13884 corp: 8/169b lim: 40 exec/s: 0 rss: 72Mb L: 25/37 MS: 1 PersAutoDict- DE: "&\241\014\301\357\211'\000"- 00:09:16.184 [2024-07-15 12:26:11.069686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a464444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.069711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.069771] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44445944 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.069785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.069853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.069867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.184 #17 NEW cov: 12116 ft: 14022 corp: 9/194b lim: 40 exec/s: 0 rss: 72Mb L: 25/37 MS: 1 ChangeBit- 00:09:16.184 [2024-07-15 12:26:11.109799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.109825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.109883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44544444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.109896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.109953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.109966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.184 #18 NEW cov: 12116 ft: 14041 corp: 10/219b lim: 40 exec/s: 0 rss: 72Mb L: 25/37 MS: 1 ChangeBit- 00:09:16.184 [2024-07-15 12:26:11.149938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.149964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.150026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44544449 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.150040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.150096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.150109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.184 #19 NEW cov: 12116 ft: 14083 corp: 11/244b lim: 40 exec/s: 0 rss: 72Mb L: 25/37 MS: 1 ChangeBinInt- 00:09:16.184 [2024-07-15 12:26:11.199938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444454 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.199963] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.200040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44494444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.200055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.184 #20 NEW cov: 12116 ft: 14144 corp: 12/267b lim: 40 exec/s: 0 rss: 72Mb L: 23/37 MS: 1 EraseBytes- 00:09:16.184 [2024-07-15 12:26:11.250448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:26a10cc1 cdw11:ef892700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.250476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.250535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.250564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.184 [2024-07-15 12:26:11.250622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.250635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.184 #21 NEW cov: 12116 ft: 14204 corp: 13/292b lim: 40 exec/s: 0 rss: 72Mb L: 25/37 MS: 1 ShuffleBytes- 00:09:16.184 [2024-07-15 12:26:11.300077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:08010000 cdw11:00270827 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.184 [2024-07-15 12:26:11.300101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.441 #26 NEW cov: 12116 ft: 14965 corp: 14/300b lim: 40 exec/s: 0 rss: 72Mb L: 8/37 MS: 5 CopyPart-ChangeBit-InsertByte-CopyPart-CMP- DE: "\001\000\000\000"- 00:09:16.441 [2024-07-15 12:26:11.340444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.340468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.441 [2024-07-15 12:26:11.340525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:54444944 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.340543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.441 [2024-07-15 12:26:11.340617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.340630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.441 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:16.441 #27 NEW cov: 12139 ft: 15065 corp: 15/324b lim: 40 exec/s: 0 rss: 73Mb L: 24/37 MS: 1 EraseBytes- 00:09:16.441 [2024-07-15 12:26:11.380422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.380448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.441 [2024-07-15 12:26:11.380522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.380546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.441 #28 NEW cov: 12139 ft: 15113 corp: 16/340b lim: 40 exec/s: 0 rss: 73Mb L: 16/37 MS: 1 CrossOver- 00:09:16.441 [2024-07-15 12:26:11.420352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:08010038 cdw11:00002708 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.420376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.441 #29 NEW cov: 12139 ft: 15158 corp: 17/349b lim: 40 exec/s: 29 rss: 73Mb L: 9/37 MS: 1 InsertByte- 00:09:16.441 [2024-07-15 12:26:11.470874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.470902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.441 [2024-07-15 12:26:11.470977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.470991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.441 [2024-07-15 12:26:11.471046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44454444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.441 [2024-07-15 12:26:11.471059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.441 #30 NEW cov: 12139 ft: 15190 corp: 18/374b lim: 40 exec/s: 30 rss: 73Mb L: 25/37 MS: 1 ChangeBinInt- 00:09:16.442 [2024-07-15 12:26:11.510802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.442 [2024-07-15 12:26:11.510827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.442 [2024-07-15 12:26:11.510902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.442 [2024-07-15 12:26:11.510916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.442 #31 NEW cov: 12139 ft: 15202 corp: 19/390b 
lim: 40 exec/s: 31 rss: 73Mb L: 16/37 MS: 1 ShuffleBytes- 00:09:16.442 [2024-07-15 12:26:11.551061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.442 [2024-07-15 12:26:11.551085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.442 [2024-07-15 12:26:11.551158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.442 [2024-07-15 12:26:11.551172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.442 [2024-07-15 12:26:11.551229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.442 [2024-07-15 12:26:11.551242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.699 #32 NEW cov: 12139 ft: 15229 corp: 20/415b lim: 40 exec/s: 32 rss: 73Mb L: 25/37 MS: 1 ShuffleBytes- 00:09:16.699 [2024-07-15 12:26:11.591043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.591067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.699 [2024-07-15 12:26:11.591126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.591140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.699 #38 NEW cov: 12139 ft: 15255 corp: 21/436b lim: 40 exec/s: 38 rss: 73Mb L: 21/37 MS: 1 EraseBytes- 00:09:16.699 [2024-07-15 12:26:11.641306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:26a10cc1 cdw11:ef77d8ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.641330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.699 [2024-07-15 12:26:11.641391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:bb444444 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.641405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.699 [2024-07-15 12:26:11.641462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.641475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.699 #39 NEW cov: 12139 ft: 15267 corp: 22/461b lim: 40 exec/s: 39 rss: 73Mb L: 25/37 MS: 1 ChangeBinInt- 00:09:16.699 [2024-07-15 12:26:11.691287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a464444 cdw11:44444444 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.691311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.699 [2024-07-15 12:26:11.691371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444459 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.691385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.699 #40 NEW cov: 12139 ft: 15278 corp: 23/483b lim: 40 exec/s: 40 rss: 73Mb L: 22/37 MS: 1 EraseBytes- 00:09:16.699 [2024-07-15 12:26:11.741606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.699 [2024-07-15 12:26:11.741630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.741708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44445944 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.741722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.741776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444401 cdw11:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.741790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.700 #41 NEW cov: 12139 ft: 15285 corp: 24/508b lim: 40 exec/s: 41 rss: 73Mb L: 25/37 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:09:16.700 [2024-07-15 12:26:11.781880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.781906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.781966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.781981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.782037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.782052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.782108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.782122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.700 #44 NEW cov: 12139 ft: 15293 corp: 25/546b lim: 40 exec/s: 44 rss: 73Mb L: 38/38 MS: 3 
CopyPart-InsertByte-InsertRepeatedBytes- 00:09:16.700 [2024-07-15 12:26:11.822184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.822208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.822283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:91919191 cdw11:55919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.822297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.822355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.822368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.822421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.822435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.700 [2024-07-15 12:26:11.822493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:91919191 cdw11:91910a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.700 [2024-07-15 12:26:11.822506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.957 #45 NEW cov: 12139 ft: 15344 corp: 26/586b lim: 40 exec/s: 45 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:09:16.957 [2024-07-15 12:26:11.872318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:55919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.872343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.872401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:91919191 cdw11:55919111 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.872414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.872486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.872499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.872557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.872571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.872628] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:91919191 cdw11:91910a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.872641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.957 #46 NEW cov: 12139 ft: 15356 corp: 27/626b lim: 40 exec/s: 46 rss: 73Mb L: 40/40 MS: 1 ChangeBit- 00:09:16.957 [2024-07-15 12:26:11.922136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.922160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.922223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.922236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.922291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44644444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.922304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.957 #47 NEW cov: 12139 ft: 15372 corp: 28/651b lim: 40 exec/s: 47 rss: 73Mb L: 25/40 MS: 1 ChangeBit- 00:09:16.957 [2024-07-15 12:26:11.962419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.962444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.962517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.962539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.962596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.962611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:11.962668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:11.962682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.957 #48 NEW cov: 12139 ft: 15401 corp: 29/688b lim: 40 exec/s: 48 rss: 73Mb L: 37/40 MS: 1 ShuffleBytes- 00:09:16.957 [2024-07-15 12:26:12.012378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a464444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:12.012404] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:12.012481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:12.012495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:12.012554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:12.012568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.957 #49 NEW cov: 12139 ft: 15409 corp: 30/713b lim: 40 exec/s: 49 rss: 73Mb L: 25/40 MS: 1 CrossOver- 00:09:16.957 [2024-07-15 12:26:12.052312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b445959 cdw11:59444459 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:12.052337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.957 [2024-07-15 12:26:12.052394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.957 [2024-07-15 12:26:12.052408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.215 #50 NEW cov: 12139 ft: 15420 corp: 31/729b lim: 40 exec/s: 50 rss: 73Mb L: 16/40 MS: 1 ShuffleBytes- 00:09:17.215 [2024-07-15 12:26:12.102635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0a44 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.102659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.102736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.102751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.102809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444445 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.102823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.215 #53 NEW cov: 12139 ft: 15460 corp: 32/756b lim: 40 exec/s: 53 rss: 73Mb L: 27/40 MS: 3 ShuffleBytes-CopyPart-CrossOver- 00:09:17.215 [2024-07-15 12:26:12.142747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:26a10cc1 cdw11:ef77d8ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.142771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.142846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 
cdw10:bb444444 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.142860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.142917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44446444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.142930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.215 #54 NEW cov: 12139 ft: 15474 corp: 33/781b lim: 40 exec/s: 54 rss: 73Mb L: 25/40 MS: 1 ChangeBit- 00:09:17.215 [2024-07-15 12:26:12.192870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:26a10cc1 cdw11:ef77d8ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.192894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.192954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:bb444444 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.192968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.193025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44446444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.193039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.215 #55 NEW cov: 12139 ft: 15491 corp: 34/806b lim: 40 exec/s: 55 rss: 74Mb L: 25/40 MS: 1 ChangeByte- 00:09:17.215 [2024-07-15 12:26:12.243052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a454444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.243078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.243134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44544444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.243152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.215 [2024-07-15 12:26:12.243207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.215 [2024-07-15 12:26:12.243221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.215 #56 NEW cov: 12139 ft: 15512 corp: 35/831b lim: 40 exec/s: 56 rss: 74Mb L: 25/40 MS: 1 ChangeBit- 00:09:17.216 [2024-07-15 12:26:12.282980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.216 [2024-07-15 12:26:12.283006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.216 
[2024-07-15 12:26:12.283083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:594444f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.216 [2024-07-15 12:26:12.283097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.216 #57 NEW cov: 12139 ft: 15526 corp: 36/848b lim: 40 exec/s: 57 rss: 74Mb L: 17/40 MS: 1 InsertByte- 00:09:17.216 [2024-07-15 12:26:12.323242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:26a10cc1 cdw11:ef77d8ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.216 [2024-07-15 12:26:12.323267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.216 [2024-07-15 12:26:12.323342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44bb4444 cdw11:44594444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.216 [2024-07-15 12:26:12.323357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.216 [2024-07-15 12:26:12.323414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444464 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.216 [2024-07-15 12:26:12.323427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.476 #58 NEW cov: 12139 ft: 15528 corp: 37/874b lim: 40 exec/s: 58 rss: 74Mb L: 26/40 MS: 1 CrossOver- 00:09:17.476 [2024-07-15 12:26:12.373540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:5b444444 cdw11:59595959 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.373566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.476 [2024-07-15 12:26:12.373643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:59595959 cdw11:59444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.373658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.476 [2024-07-15 12:26:12.373725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:babababa cdw11:babababa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.373739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.476 [2024-07-15 12:26:12.373793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:babababa cdw11:ba0025ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.373806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.476 #59 NEW cov: 12139 ft: 15535 corp: 38/911b lim: 40 exec/s: 59 rss: 74Mb L: 37/40 MS: 1 ChangeBinInt- 00:09:17.476 [2024-07-15 12:26:12.423553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a5b4444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.423579] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.476 [2024-07-15 12:26:12.423654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.423696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.476 [2024-07-15 12:26:12.423751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:44444444 cdw11:44444444 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:17.476 [2024-07-15 12:26:12.423765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.476 #60 NEW cov: 12139 ft: 15551 corp: 39/936b lim: 40 exec/s: 30 rss: 74Mb L: 25/40 MS: 1 ChangeByte- 00:09:17.476 #60 DONE cov: 12139 ft: 15551 corp: 39/936b lim: 40 exec/s: 30 rss: 74Mb 00:09:17.476 ###### Recommended dictionary. ###### 00:09:17.476 "&\241\014\301\357\211'\000" # Uses: 1 00:09:17.476 "\001\000\000\000" # Uses: 1 00:09:17.476 ###### End of recommended dictionary. ###### 00:09:17.476 Done 60 runs in 2 second(s) 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:17.476 12:26:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:09:17.476 [2024-07-15 12:26:12.596537] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:17.476 [2024-07-15 12:26:12.596608] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162726 ] 00:09:17.733 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.733 [2024-07-15 12:26:12.788743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.733 [2024-07-15 12:26:12.861602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.991 [2024-07-15 12:26:12.921304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.991 [2024-07-15 12:26:12.937509] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:09:17.991 INFO: Running with entropic power schedule (0xFF, 100). 00:09:17.991 INFO: Seed: 1688249351 00:09:17.991 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:17.991 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:17.991 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:17.991 INFO: A corpus is not provided, starting from an empty corpus 00:09:17.991 #2 INITED exec/s: 0 rss: 65Mb 00:09:17.991 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:17.991 This may also happen if the target rejected all inputs we tried so far 00:09:17.991 [2024-07-15 12:26:12.982467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.991 [2024-07-15 12:26:12.982503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.991 [2024-07-15 12:26:12.982547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.991 [2024-07-15 12:26:12.982564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.991 [2024-07-15 12:26:12.982595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.991 [2024-07-15 12:26:12.982611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.991 [2024-07-15 12:26:12.982641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.991 [2024-07-15 12:26:12.982657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.288 NEW_FUNC[1/695]: 0x496390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:09:18.289 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:18.289 #23 NEW cov: 11883 ft: 11881 corp: 2/35b lim: 40 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:09:18.289 [2024-07-15 12:26:13.333339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.289 [2024-07-15 12:26:13.333390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.289 [2024-07-15 12:26:13.333427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.289 [2024-07-15 12:26:13.333444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.289 [2024-07-15 12:26:13.333475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.289 [2024-07-15 12:26:13.333491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.289 [2024-07-15 12:26:13.333521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.289 [2024-07-15 12:26:13.333550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:09:18.289 #24 NEW cov: 12013 ft: 12397 corp: 3/69b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:09:18.546 [2024-07-15 12:26:13.413326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f0a cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.413362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.413412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.413430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.546 #25 NEW cov: 12019 ft: 13165 corp: 4/89b lim: 40 exec/s: 0 rss: 72Mb L: 20/34 MS: 1 CrossOver- 00:09:18.546 [2024-07-15 12:26:13.493616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.493647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.493682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.493698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.493729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.493744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.493774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.493790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.546 #26 NEW cov: 12104 ft: 13428 corp: 5/123b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:09:18.546 [2024-07-15 12:26:13.553788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.553818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.553852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.553868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.553898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.553914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.553943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.553959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.546 #27 NEW cov: 12104 ft: 13546 corp: 6/157b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:09:18.546 [2024-07-15 12:26:13.603842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f0a cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.546 [2024-07-15 12:26:13.603874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.546 [2024-07-15 12:26:13.603909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.547 [2024-07-15 12:26:13.603927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.547 #28 NEW cov: 12104 ft: 13674 corp: 7/178b lim: 40 exec/s: 0 rss: 72Mb L: 21/34 MS: 1 InsertByte- 00:09:18.805 [2024-07-15 12:26:13.683949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f0a cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.683980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.684030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.684046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.805 #29 NEW cov: 12104 ft: 13839 corp: 8/197b lim: 40 exec/s: 0 rss: 72Mb L: 19/34 MS: 1 EraseBytes- 00:09:18.805 [2024-07-15 12:26:13.744195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.744226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.744275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.744291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.744321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.744336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 
12:26:13.744366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.744382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.805 #30 NEW cov: 12104 ft: 13873 corp: 9/231b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ShuffleBytes- 00:09:18.805 [2024-07-15 12:26:13.794312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f0a cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.794343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.794378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.794394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.794424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.794443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.805 #31 NEW cov: 12104 ft: 14162 corp: 10/262b lim: 40 exec/s: 0 rss: 72Mb L: 31/34 MS: 1 InsertRepeatedBytes- 00:09:18.805 [2024-07-15 12:26:13.854521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.854560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.854596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.854612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.854642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.854658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.854687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3fc1 cdw11:bc3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.854703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.805 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:18.805 #32 NEW cov: 12121 ft: 14257 corp: 11/296b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBinInt- 00:09:18.805 [2024-07-15 12:26:13.904620] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.904650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.904682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.904698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.904726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3fc5 cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.904741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.805 [2024-07-15 12:26:13.904769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.805 [2024-07-15 12:26:13.904784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.064 #33 NEW cov: 12121 ft: 14300 corp: 12/330b lim: 40 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 ChangeBinInt- 00:09:19.064 [2024-07-15 12:26:13.984922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:13.984954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:13.984989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:13.985006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:13.985040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f01003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:13.985056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:13.985087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:13.985103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.064 #34 NEW cov: 12121 ft: 14322 corp: 13/366b lim: 40 exec/s: 34 rss: 72Mb L: 36/36 MS: 1 CMP- DE: "\001\000"- 00:09:19.064 [2024-07-15 12:26:14.034951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.034981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:14.035015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.035030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:14.035059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3fc5 cdw11:3f3f7f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.035074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:14.035103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.035117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.064 #35 NEW cov: 12121 ft: 14330 corp: 14/400b lim: 40 exec/s: 35 rss: 72Mb L: 34/36 MS: 1 ChangeBit- 00:09:19.064 [2024-07-15 12:26:14.115207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.115239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:14.115273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3fc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.115289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:14.115319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c03f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.115335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.064 [2024-07-15 12:26:14.115365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.064 [2024-07-15 12:26:14.115380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.064 #36 NEW cov: 12121 ft: 14372 corp: 15/434b lim: 40 exec/s: 36 rss: 73Mb L: 34/36 MS: 1 ChangeBinInt- 00:09:19.322 [2024-07-15 12:26:14.195465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.322 [2024-07-15 12:26:14.195496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.322 [2024-07-15 12:26:14.195542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3fc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.322 [2024-07-15 12:26:14.195559] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.322 [2024-07-15 12:26:14.195590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c03f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.322 [2024-07-15 12:26:14.195606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.195637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.195653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.323 #37 NEW cov: 12121 ft: 14390 corp: 16/468b lim: 40 exec/s: 37 rss: 73Mb L: 34/36 MS: 1 ShuffleBytes- 00:09:19.323 [2024-07-15 12:26:14.275676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.275707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.275742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f6d3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.275758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.275789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.275805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.275835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.275851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.323 #38 NEW cov: 12121 ft: 14409 corp: 17/502b lim: 40 exec/s: 38 rss: 73Mb L: 34/36 MS: 1 ChangeByte- 00:09:19.323 [2024-07-15 12:26:14.335682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.335712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.335746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3fc1 cdw11:bc3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.335762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.323 #39 NEW cov: 12121 ft: 14418 corp: 18/520b lim: 40 exec/s: 39 rss: 73Mb L: 18/36 MS: 1 EraseBytes- 00:09:19.323 [2024-07-15 12:26:14.415943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:dcdcdcdc cdw11:dcdcdcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.415973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.416008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:dcdcdcdc cdw11:dcdcdcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.416031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.323 [2024-07-15 12:26:14.416062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:dcdcdcdc cdw11:dcdcdcdc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.323 [2024-07-15 12:26:14.416078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.582 #40 NEW cov: 12121 ft: 14437 corp: 19/548b lim: 40 exec/s: 40 rss: 73Mb L: 28/36 MS: 1 InsertRepeatedBytes- 00:09:19.582 [2024-07-15 12:26:14.476201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.476231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.476265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3fc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.476282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.476311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c03f3fff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.476327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.476356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.476372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.476401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.476416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:19.582 #41 NEW cov: 12121 ft: 14488 corp: 20/588b lim: 40 exec/s: 41 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:19.582 [2024-07-15 12:26:14.526143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.526172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:19.582 [2024-07-15 12:26:14.526204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.526219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.582 #42 NEW cov: 12121 ft: 14532 corp: 21/607b lim: 40 exec/s: 42 rss: 73Mb L: 19/40 MS: 1 InsertRepeatedBytes- 00:09:19.582 [2024-07-15 12:26:14.576409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.576438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.576470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.576485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.576513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:f4f4f4f4 cdw11:f4f43f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.576537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.576583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.576598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.576628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.576643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:19.582 #43 NEW cov: 12121 ft: 14543 corp: 22/647b lim: 40 exec/s: 43 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:19.582 [2024-07-15 12:26:14.626548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.626577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.626611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.626628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.626657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:293f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.626673] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.582 [2024-07-15 12:26:14.626703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.626718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.582 #44 NEW cov: 12121 ft: 14546 corp: 23/682b lim: 40 exec/s: 44 rss: 73Mb L: 35/40 MS: 1 InsertByte- 00:09:19.582 [2024-07-15 12:26:14.676507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.582 [2024-07-15 12:26:14.676544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.841 #46 NEW cov: 12121 ft: 14896 corp: 24/696b lim: 40 exec/s: 46 rss: 73Mb L: 14/40 MS: 2 InsertByte-CrossOver- 00:09:19.841 [2024-07-15 12:26:14.736724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f0a cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.736752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.736784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f373f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.736800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.841 #47 NEW cov: 12121 ft: 14908 corp: 25/715b lim: 40 exec/s: 47 rss: 73Mb L: 19/40 MS: 1 ChangeBit- 00:09:19.841 [2024-07-15 12:26:14.817041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.817069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.817105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3fc1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.817120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.817148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:c03f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.817162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.817190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3fc1c03f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.817205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:19.841 #48 NEW cov: 12121 ft: 14939 corp: 26/749b lim: 40 exec/s: 48 rss: 73Mb 
L: 34/40 MS: 1 CopyPart- 00:09:19.841 [2024-07-15 12:26:14.867086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3b3f3f0a cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.867115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.867149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f373f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.867165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.841 #49 NEW cov: 12128 ft: 14974 corp: 27/768b lim: 40 exec/s: 49 rss: 73Mb L: 19/40 MS: 1 ChangeBinInt- 00:09:19.841 [2024-07-15 12:26:14.947387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.947420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.947454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3f3f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.947471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.947500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3f3f3f3f cdw11:3f01003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.947516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.841 [2024-07-15 12:26:14.947558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:3f3f3f3f cdw11:3f3f3fbf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.841 [2024-07-15 12:26:14.947575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.100 #50 NEW cov: 12128 ft: 14982 corp: 28/804b lim: 40 exec/s: 25 rss: 73Mb L: 36/40 MS: 1 ChangeBinInt- 00:09:20.100 #50 DONE cov: 12128 ft: 14982 corp: 28/804b lim: 40 exec/s: 25 rss: 73Mb 00:09:20.100 ###### Recommended dictionary. ###### 00:09:20.100 "\001\000" # Uses: 0 00:09:20.100 ###### End of recommended dictionary. 
###### 00:09:20.100 Done 50 runs in 2 second(s) 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:20.100 12:26:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:09:20.100 [2024-07-15 12:26:15.187766] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:20.100 [2024-07-15 12:26:15.187845] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163051 ] 00:09:20.100 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.358 [2024-07-15 12:26:15.386409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.358 [2024-07-15 12:26:15.458669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.616 [2024-07-15 12:26:15.518299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.616 [2024-07-15 12:26:15.534505] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:09:20.616 INFO: Running with entropic power schedule (0xFF, 100). 00:09:20.616 INFO: Seed: 4284270405 00:09:20.616 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:20.616 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:20.616 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:20.616 INFO: A corpus is not provided, starting from an empty corpus 00:09:20.616 #2 INITED exec/s: 0 rss: 65Mb 00:09:20.616 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:20.616 This may also happen if the target rejected all inputs we tried so far 00:09:20.616 [2024-07-15 12:26:15.589862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.616 [2024-07-15 12:26:15.589892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.875 NEW_FUNC[1/696]: 0x497f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:09:20.875 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:20.875 #15 NEW cov: 11865 ft: 11878 corp: 2/14b lim: 35 exec/s: 0 rss: 72Mb L: 13/13 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:09:20.875 [2024-07-15 12:26:15.911067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.875 [2024-07-15 12:26:15.911135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.875 [2024-07-15 12:26:15.911219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.875 [2024-07-15 12:26:15.911248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.875 #16 NEW cov: 12007 ft: 13220 corp: 3/28b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 InsertByte- 00:09:20.875 [2024-07-15 12:26:15.971080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.875 [2024-07-15 12:26:15.971107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.875 [2024-07-15 
12:26:15.971163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.875 [2024-07-15 12:26:15.971178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.875 [2024-07-15 12:26:15.971235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.875 [2024-07-15 12:26:15.971250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.134 #17 NEW cov: 12013 ft: 13602 corp: 4/50b lim: 35 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 CMP- DE: "5\231\017\266\362\211'\000"- 00:09:21.134 [2024-07-15 12:26:16.020914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.020940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.134 #18 NEW cov: 12098 ft: 13946 corp: 5/63b lim: 35 exec/s: 0 rss: 73Mb L: 13/22 MS: 1 ShuffleBytes- 00:09:21.134 [2024-07-15 12:26:16.061050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.061076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.134 #19 NEW cov: 12098 ft: 14105 corp: 6/76b lim: 35 exec/s: 0 rss: 73Mb L: 13/22 MS: 1 ChangeBit- 00:09:21.134 [2024-07-15 12:26:16.101446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.101475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.101535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.101553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.101612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.101629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.134 #20 NEW cov: 12098 ft: 14227 corp: 7/98b lim: 35 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 ChangeByte- 00:09:21.134 [2024-07-15 12:26:16.151764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.151794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.151851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.151867] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.151924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.151939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.151995] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.152010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.134 #21 NEW cov: 12098 ft: 14586 corp: 8/130b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:09:21.134 [2024-07-15 12:26:16.192022] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.192050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.192114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.192135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.192197] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.192215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.192274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.192293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.192351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.192368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:21.134 #22 NEW cov: 12098 ft: 14667 corp: 9/165b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:09:21.134 [2024-07-15 12:26:16.231772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.231798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.231856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.231872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.134 [2024-07-15 12:26:16.231928] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.134 [2024-07-15 12:26:16.231944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.134 #23 NEW cov: 12098 ft: 14686 corp: 10/187b lim: 35 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 ChangeBit- 00:09:21.393 [2024-07-15 12:26:16.271794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.271824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.271887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.271905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.393 #29 NEW cov: 12098 ft: 14740 corp: 11/207b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 1 InsertRepeatedBytes- 00:09:21.393 [2024-07-15 12:26:16.312218] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.312245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.312301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.312315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.312371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.312385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.393 NEW_FUNC[1/2]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:21.393 NEW_FUNC[2/2]: 0x11f0900 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:09:21.393 #31 NEW cov: 12138 ft: 14844 corp: 12/241b lim: 35 exec/s: 0 rss: 73Mb L: 34/35 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:21.393 [2024-07-15 12:26:16.362366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.362392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.362450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.362467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.362522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.362543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.362600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.362616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.393 #33 NEW cov: 12138 ft: 14906 corp: 13/272b lim: 35 exec/s: 0 rss: 73Mb L: 31/35 MS: 2 ChangeBit-CrossOver- 00:09:21.393 [2024-07-15 12:26:16.401978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.402005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.393 #34 NEW cov: 12138 ft: 14939 corp: 14/285b lim: 35 exec/s: 0 rss: 73Mb L: 13/35 MS: 1 ChangeBinInt- 00:09:21.393 [2024-07-15 12:26:16.452288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.452319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.452385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.452406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.393 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:21.393 #35 NEW cov: 12161 ft: 15006 corp: 15/299b lim: 35 exec/s: 0 rss: 73Mb L: 14/35 MS: 1 ChangeByte- 00:09:21.393 [2024-07-15 12:26:16.492375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.492402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.393 [2024-07-15 12:26:16.492464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.393 [2024-07-15 12:26:16.492480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.651 #36 NEW cov: 12161 ft: 15029 corp: 16/315b lim: 35 exec/s: 0 rss: 73Mb L: 16/35 MS: 1 InsertRepeatedBytes- 00:09:21.651 [2024-07-15 12:26:16.542807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.542835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.651 [2024-07-15 12:26:16.542893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.542907] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.651 [2024-07-15 12:26:16.542965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.542979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.651 [2024-07-15 12:26:16.543035] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.543050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.651 #37 NEW cov: 12161 ft: 15064 corp: 17/345b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:09:21.651 [2024-07-15 12:26:16.582813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.582840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.651 [2024-07-15 12:26:16.582899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.582914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.651 [2024-07-15 12:26:16.582971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.582986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.651 #38 NEW cov: 12161 ft: 15083 corp: 18/367b lim: 35 exec/s: 38 rss: 73Mb L: 22/35 MS: 1 ChangeByte- 00:09:21.651 [2024-07-15 12:26:16.622737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.622766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.651 [2024-07-15 12:26:16.622823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.622839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.651 #39 NEW cov: 12161 ft: 15108 corp: 19/381b lim: 35 exec/s: 39 rss: 73Mb L: 14/35 MS: 1 InsertByte- 00:09:21.651 [2024-07-15 12:26:16.662665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.662691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.651 #40 NEW cov: 12161 ft: 15148 corp: 20/394b lim: 35 exec/s: 40 rss: 73Mb L: 13/35 MS: 1 ChangeByte- 00:09:21.651 [2024-07-15 12:26:16.702797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:21.651 [2024-07-15 12:26:16.702823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.651 #41 NEW cov: 12161 ft: 15193 corp: 21/407b lim: 35 exec/s: 41 rss: 73Mb L: 13/35 MS: 1 ChangeByte- 00:09:21.651 [2024-07-15 12:26:16.742893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.651 [2024-07-15 12:26:16.742920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.651 #42 NEW cov: 12161 ft: 15200 corp: 22/420b lim: 35 exec/s: 42 rss: 73Mb L: 13/35 MS: 1 PersAutoDict- DE: "5\231\017\266\362\211'\000"- 00:09:21.910 [2024-07-15 12:26:16.783482] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.783507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:16.783569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.783586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:16.783645] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.783661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:16.783718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.783733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.910 #43 NEW cov: 12161 ft: 15215 corp: 23/452b lim: 35 exec/s: 43 rss: 73Mb L: 32/35 MS: 1 InsertByte- 00:09:21.910 [2024-07-15 12:26:16.833188] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000035 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.833215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.910 #45 NEW cov: 12161 ft: 15249 corp: 24/462b lim: 35 exec/s: 45 rss: 73Mb L: 10/35 MS: 2 InsertByte-PersAutoDict- DE: "5\231\017\266\362\211'\000"- 00:09:21.910 [2024-07-15 12:26:16.873486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.873510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:16.873581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.873597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.910 #46 NEW cov: 
12161 ft: 15261 corp: 25/477b lim: 35 exec/s: 46 rss: 73Mb L: 15/35 MS: 1 InsertRepeatedBytes- 00:09:21.910 [2024-07-15 12:26:16.913706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.913732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:16.913791] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000026 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.913807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:16.913868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.913883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.910 #47 NEW cov: 12161 ft: 15332 corp: 26/499b lim: 35 exec/s: 47 rss: 74Mb L: 22/35 MS: 1 ChangeByte- 00:09:21.910 [2024-07-15 12:26:16.953561] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:16.953586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.910 #48 NEW cov: 12161 ft: 15360 corp: 27/506b lim: 35 exec/s: 48 rss: 74Mb L: 7/35 MS: 1 CrossOver- 00:09:21.910 [2024-07-15 12:26:17.003862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:17.003890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.910 [2024-07-15 12:26:17.003950] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.910 [2024-07-15 12:26:17.003966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.168 #49 NEW cov: 12161 ft: 15376 corp: 28/526b lim: 35 exec/s: 49 rss: 74Mb L: 20/35 MS: 1 ChangeBit- 00:09:22.168 [2024-07-15 12:26:17.054006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.054034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.168 [2024-07-15 12:26:17.054094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.054110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.168 #50 NEW cov: 12161 ft: 15383 corp: 29/546b lim: 35 exec/s: 50 rss: 74Mb L: 20/35 MS: 1 ChangeBit- 00:09:22.168 [2024-07-15 12:26:17.094246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:22.168 [2024-07-15 12:26:17.094274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.168 [2024-07-15 12:26:17.094335] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000026 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.094351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.168 [2024-07-15 12:26:17.094407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.094424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:22.168 #51 NEW cov: 12161 ft: 15391 corp: 30/568b lim: 35 exec/s: 51 rss: 74Mb L: 22/35 MS: 1 ShuffleBytes- 00:09:22.168 [2024-07-15 12:26:17.144076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.144105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.168 #52 NEW cov: 12161 ft: 15405 corp: 31/581b lim: 35 exec/s: 52 rss: 74Mb L: 13/35 MS: 1 ChangeBit- 00:09:22.168 [2024-07-15 12:26:17.184552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.184579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.168 [2024-07-15 12:26:17.184639] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.184656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.168 [2024-07-15 12:26:17.184713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.184729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:22.168 #53 NEW cov: 12161 ft: 15428 corp: 32/603b lim: 35 exec/s: 53 rss: 74Mb L: 22/35 MS: 1 ChangeBit- 00:09:22.168 [2024-07-15 12:26:17.224468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.168 [2024-07-15 12:26:17.224494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.169 [2024-07-15 12:26:17.224566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.169 [2024-07-15 12:26:17.224582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.169 #54 NEW cov: 12161 ft: 15482 corp: 33/618b lim: 35 exec/s: 54 rss: 74Mb L: 15/35 MS: 1 ChangeBit- 00:09:22.169 [2024-07-15 12:26:17.274772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.169 [2024-07-15 12:26:17.274798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.169 [2024-07-15 12:26:17.274859] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000b6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.169 [2024-07-15 12:26:17.274875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.169 [2024-07-15 12:26:17.274929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.169 [2024-07-15 12:26:17.274943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:22.426 #55 NEW cov: 12161 ft: 15490 corp: 34/640b lim: 35 exec/s: 55 rss: 74Mb L: 22/35 MS: 1 ChangeBit- 00:09:22.426 [2024-07-15 12:26:17.324914] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.324940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.324996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000026 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.325012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.325063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.325078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:22.426 #56 NEW cov: 12161 ft: 15492 corp: 35/662b lim: 35 exec/s: 56 rss: 74Mb L: 22/35 MS: 1 ChangeBinInt- 00:09:22.426 [2024-07-15 12:26:17.365159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.365184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.365258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.365272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.365327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.365341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.365396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.365411] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:22.426 #57 NEW cov: 12161 ft: 15502 corp: 36/693b lim: 35 exec/s: 57 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:09:22.426 [2024-07-15 12:26:17.415040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000040 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.415065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.415120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.415136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.426 #58 NEW cov: 12161 ft: 15507 corp: 37/707b lim: 35 exec/s: 58 rss: 74Mb L: 14/35 MS: 1 CrossOver- 00:09:22.426 [2024-07-15 12:26:17.455017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.455044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.426 #59 NEW cov: 12161 ft: 15633 corp: 38/720b lim: 35 exec/s: 59 rss: 74Mb L: 13/35 MS: 1 ChangeBit- 00:09:22.426 [2024-07-15 12:26:17.505283] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.505309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.505369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.505386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.426 #60 NEW cov: 12161 ft: 15637 corp: 39/740b lim: 35 exec/s: 60 rss: 74Mb L: 20/35 MS: 1 ChangeBinInt- 00:09:22.426 [2024-07-15 12:26:17.545389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.545413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.426 [2024-07-15 12:26:17.545471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000003e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.426 [2024-07-15 12:26:17.545486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:22.684 #61 NEW cov: 12161 ft: 15650 corp: 40/756b lim: 35 exec/s: 30 rss: 74Mb L: 16/35 MS: 1 InsertByte- 00:09:22.684 #61 DONE cov: 12161 ft: 15650 corp: 40/756b lim: 35 exec/s: 30 rss: 74Mb 00:09:22.684 ###### Recommended dictionary. ###### 00:09:22.684 "5\231\017\266\362\211'\000" # Uses: 2 00:09:22.684 ###### End of recommended dictionary. 
###### 00:09:22.684 Done 61 runs in 2 second(s) 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:22.684 12:26:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:09:22.684 [2024-07-15 12:26:17.752776] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
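[Editor's note] The nvmf/run.sh trace above (and the equivalent trace for target 16 with port 4416 further down) condenses into the following shell sketch for reproducing a single llvm_nvme_fuzz launch outside Jenkins. Paths and flags are copied from the log; the redirections into the config and suppression files are an assumption, since the set -x trace does not print them, and the workspace path would need adjusting for a local checkout.

    #!/usr/bin/env bash
    # Sketch of one llvm_nvme_fuzz launch as traced above (assumed workspace layout).
    set -euo pipefail

    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # path taken from the log
    FUZZER=15                                                     # fuzzer_type in start_llvm_fuzz
    TIMEN=1                                                       # run time in seconds (-t)
    CORE=0x1                                                      # core mask (-m)
    PORT=44$(printf %02d "$FUZZER")                               # run.sh derives 4415 this way
    CORPUS="$SPDK_DIR/../corpus/llvm_nvmf_$FUZZER"
    CONF="/tmp/fuzz_json_$FUZZER.conf"
    SUPP=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$CORPUS"

    # Rewrite the JSON config so the NVMe/TCP listener uses the per-fuzzer service id,
    # mirroring the sed call in the trace (output redirection is assumed).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$PORT\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$CONF"

    # Suppress the two known leak sites echoed in the trace (destination file is assumed).
    printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$SUPP"

    LSAN_OPTIONS="report_objects=1:suppressions=$SUPP:print_suppressions=0" \
      "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$CORE" -s 512 \
        -P "$SPDK_DIR/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$PORT" \
        -c "$CONF" -t "$TIMEN" -D "$CORPUS" -Z "$FUZZER"

The fuzzer-15 startup log continues below.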
00:09:22.684 [2024-07-15 12:26:17.752865] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163337 ] 00:09:22.684 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.941 [2024-07-15 12:26:17.954279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.941 [2024-07-15 12:26:18.027543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.198 [2024-07-15 12:26:18.087002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.198 [2024-07-15 12:26:18.103203] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:09:23.198 INFO: Running with entropic power schedule (0xFF, 100). 00:09:23.198 INFO: Seed: 2559306582 00:09:23.198 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:23.198 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:23.198 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:23.198 INFO: A corpus is not provided, starting from an empty corpus 00:09:23.198 #2 INITED exec/s: 0 rss: 65Mb 00:09:23.198 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:23.198 This may also happen if the target rejected all inputs we tried so far 00:09:23.456 NEW_FUNC[1/682]: 0x499490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:09:23.456 NEW_FUNC[2/682]: 0x4b9410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:23.456 #8 NEW cov: 11750 ft: 11751 corp: 2/10b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:09:23.456 #10 NEW cov: 11880 ft: 12362 corp: 3/17b lim: 35 exec/s: 0 rss: 72Mb L: 7/9 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:23.456 #11 NEW cov: 11886 ft: 12681 corp: 4/26b lim: 35 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:23.713 #12 NEW cov: 11971 ft: 12904 corp: 5/36b lim: 35 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:09:23.713 [2024-07-15 12:26:18.609644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TIMESTAMP cid:4 cdw10:0000050e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.713 [2024-07-15 12:26:18.609689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.714 NEW_FUNC[1/14]: 0x17986e0 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:09:23.714 NEW_FUNC[2/14]: 0x1798920 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:09:23.714 #13 NEW cov: 12098 ft: 13131 corp: 6/43b lim: 35 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ChangeBit- 00:09:23.714 #14 NEW cov: 12098 ft: 13216 corp: 7/50b lim: 35 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ShuffleBytes- 00:09:23.714 [2024-07-15 12:26:18.700003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.714 [2024-07-15 12:26:18.700033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:09:23.714 #15 NEW cov: 12100 ft: 13462 corp: 8/70b lim: 35 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:09:23.714 [2024-07-15 12:26:18.750433] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.714 [2024-07-15 12:26:18.750457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.714 [2024-07-15 12:26:18.750515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.714 [2024-07-15 12:26:18.750534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:23.714 [2024-07-15 12:26:18.750608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005b4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.714 [2024-07-15 12:26:18.750622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:23.714 #16 NEW cov: 12100 ft: 14043 corp: 9/102b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:09:23.714 #17 NEW cov: 12100 ft: 14110 corp: 10/112b lim: 35 exec/s: 0 rss: 72Mb L: 10/32 MS: 1 CopyPart- 00:09:23.971 NEW_FUNC[1/1]: 0x4b6ee0 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:09:23.971 #18 NEW cov: 12132 ft: 14204 corp: 11/119b lim: 35 exec/s: 0 rss: 72Mb L: 7/32 MS: 1 ChangeBinInt- 00:09:23.971 #19 NEW cov: 12132 ft: 14240 corp: 12/132b lim: 35 exec/s: 0 rss: 72Mb L: 13/32 MS: 1 EraseBytes- 00:09:23.971 [2024-07-15 12:26:18.930768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.971 [2024-07-15 12:26:18.930794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.971 [2024-07-15 12:26:18.930855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.971 [2024-07-15 12:26:18.930869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:23.971 #20 NEW cov: 12132 ft: 14415 corp: 13/153b lim: 35 exec/s: 0 rss: 72Mb L: 21/32 MS: 1 InsertByte- 00:09:23.971 [2024-07-15 12:26:18.971021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.971 [2024-07-15 12:26:18.971047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.971 [2024-07-15 12:26:18.971105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.971 [2024-07-15 12:26:18.971119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:23.971 [2024-07-15 12:26:18.971175] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000005b4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.971 [2024-07-15 12:26:18.971189] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:23.971 #21 NEW cov: 12132 ft: 14513 corp: 14/185b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:23.971 [2024-07-15 12:26:19.020895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:23.971 [2024-07-15 12:26:19.020921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.971 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:23.971 #27 NEW cov: 12155 ft: 14617 corp: 15/205b lim: 35 exec/s: 0 rss: 72Mb L: 20/32 MS: 1 ChangeBinInt- 00:09:23.971 #28 NEW cov: 12155 ft: 14635 corp: 16/212b lim: 35 exec/s: 0 rss: 72Mb L: 7/32 MS: 1 ChangeByte- 00:09:24.229 [2024-07-15 12:26:19.101177] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.229 [2024-07-15 12:26:19.101203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.229 #29 NEW cov: 12155 ft: 14663 corp: 17/229b lim: 35 exec/s: 0 rss: 73Mb L: 17/32 MS: 1 CopyPart- 00:09:24.229 [2024-07-15 12:26:19.151108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.229 [2024-07-15 12:26:19.151135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.229 #30 NEW cov: 12155 ft: 14680 corp: 18/238b lim: 35 exec/s: 30 rss: 73Mb L: 9/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:24.229 [2024-07-15 12:26:19.191407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.191433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.230 #31 NEW cov: 12155 ft: 14691 corp: 19/257b lim: 35 exec/s: 31 rss: 73Mb L: 19/32 MS: 1 CrossOver- 00:09:24.230 [2024-07-15 12:26:19.231796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000496 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.231825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.230 [2024-07-15 12:26:19.231887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000496 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.231901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:24.230 [2024-07-15 12:26:19.231957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000496 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.231970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:24.230 #32 NEW cov: 12155 ft: 14701 corp: 20/288b lim: 35 exec/s: 32 rss: 73Mb L: 31/32 MS: 1 InsertRepeatedBytes- 00:09:24.230 [2024-07-15 12:26:19.271464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.271489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.230 #33 NEW cov: 12155 ft: 14732 corp: 21/295b lim: 35 exec/s: 33 rss: 73Mb L: 7/32 MS: 1 ShuffleBytes- 00:09:24.230 [2024-07-15 12:26:19.322068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000496 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.322094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.230 [2024-07-15 12:26:19.322151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000496 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.322165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:24.230 [2024-07-15 12:26:19.322220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000096 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.230 [2024-07-15 12:26:19.322235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:24.230 #34 NEW cov: 12155 ft: 14805 corp: 22/326b lim: 35 exec/s: 34 rss: 73Mb L: 31/32 MS: 1 ShuffleBytes- 00:09:24.488 NEW_FUNC[1/1]: 0x4b6a10 in feat_volatile_write_cache /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:312 00:09:24.488 #35 NEW cov: 12169 ft: 14846 corp: 23/333b lim: 35 exec/s: 35 rss: 73Mb L: 7/32 MS: 1 ChangeBit- 00:09:24.488 [2024-07-15 12:26:19.421898] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.488 [2024-07-15 12:26:19.421924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.488 #36 NEW cov: 12169 ft: 14922 corp: 24/341b lim: 35 exec/s: 36 rss: 73Mb L: 8/32 MS: 1 InsertByte- 00:09:24.488 [2024-07-15 12:26:19.472033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.488 [2024-07-15 12:26:19.472058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.488 #37 NEW cov: 12169 ft: 14941 corp: 25/349b lim: 35 exec/s: 37 rss: 73Mb L: 8/32 MS: 1 ChangeByte- 00:09:24.488 [2024-07-15 12:26:19.522176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.488 [2024-07-15 12:26:19.522200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.488 #38 NEW cov: 12169 ft: 14957 corp: 26/358b lim: 35 exec/s: 38 rss: 73Mb L: 9/32 MS: 1 ChangeBit- 00:09:24.488 [2024-07-15 12:26:19.572333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.488 [2024-07-15 12:26:19.572361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.488 #39 NEW cov: 12169 
ft: 14968 corp: 27/367b lim: 35 exec/s: 39 rss: 73Mb L: 9/32 MS: 1 ChangeBit- 00:09:24.746 #40 NEW cov: 12169 ft: 14989 corp: 28/380b lim: 35 exec/s: 40 rss: 73Mb L: 13/32 MS: 1 ChangeBit- 00:09:24.746 [2024-07-15 12:26:19.662725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.662750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.746 #41 NEW cov: 12169 ft: 15000 corp: 29/400b lim: 35 exec/s: 41 rss: 73Mb L: 20/32 MS: 1 ChangeByte- 00:09:24.746 [2024-07-15 12:26:19.712809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.712834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.746 [2024-07-15 12:26:19.712895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.712909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.746 #42 NEW cov: 12169 ft: 15009 corp: 30/418b lim: 35 exec/s: 42 rss: 73Mb L: 18/32 MS: 1 InsertRepeatedBytes- 00:09:24.746 [2024-07-15 12:26:19.753112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.753137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.746 [2024-07-15 12:26:19.753193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.753206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:24.746 #43 NEW cov: 12169 ft: 15012 corp: 31/439b lim: 35 exec/s: 43 rss: 73Mb L: 21/32 MS: 1 ChangeBinInt- 00:09:24.746 [2024-07-15 12:26:19.803064] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.803088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.746 [2024-07-15 12:26:19.803149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000037d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.803162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.746 #44 NEW cov: 12169 ft: 15021 corp: 32/457b lim: 35 exec/s: 44 rss: 73Mb L: 18/32 MS: 1 ChangeBinInt- 00:09:24.746 [2024-07-15 12:26:19.853190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.853215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.746 [2024-07-15 12:26:19.853272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED 
cid:5 cdw10:0000007d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:24.746 [2024-07-15 12:26:19.853285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.003 #45 NEW cov: 12169 ft: 15026 corp: 33/475b lim: 35 exec/s: 45 rss: 73Mb L: 18/32 MS: 1 ShuffleBytes- 00:09:25.004 [2024-07-15 12:26:19.903382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:19.903407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.004 #46 NEW cov: 12169 ft: 15031 corp: 34/494b lim: 35 exec/s: 46 rss: 74Mb L: 19/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:25.004 [2024-07-15 12:26:19.953605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:19.953631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.004 [2024-07-15 12:26:19.953690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000007d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:19.953703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.004 [2024-07-15 12:26:19.953759] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:19.953772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.004 #47 NEW cov: 12169 ft: 15058 corp: 35/520b lim: 35 exec/s: 47 rss: 74Mb L: 26/32 MS: 1 CMP- DE: "\306\241\204\221\364\211'\000"- 00:09:25.004 [2024-07-15 12:26:20.003676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:20.003703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.004 #48 NEW cov: 12169 ft: 15063 corp: 36/538b lim: 35 exec/s: 48 rss: 74Mb L: 18/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:25.004 [2024-07-15 12:26:20.054021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:20.054053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.004 [2024-07-15 12:26:20.054114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.004 [2024-07-15 12:26:20.054128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.004 #54 NEW cov: 12169 ft: 15075 corp: 37/562b lim: 35 exec/s: 54 rss: 74Mb L: 24/32 MS: 1 CopyPart- 00:09:25.263 #55 NEW cov: 12169 ft: 15081 corp: 38/569b lim: 35 exec/s: 55 rss: 74Mb L: 7/32 MS: 1 ChangeBinInt- 00:09:25.263 [2024-07-15 12:26:20.153915] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.263 [2024-07-15 12:26:20.153949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.263 #56 NEW cov: 12169 ft: 15132 corp: 39/578b lim: 35 exec/s: 28 rss: 74Mb L: 9/32 MS: 1 ChangeByte- 00:09:25.263 #56 DONE cov: 12169 ft: 15132 corp: 39/578b lim: 35 exec/s: 28 rss: 74Mb 00:09:25.263 ###### Recommended dictionary. ###### 00:09:25.263 "\000\000\000\000\000\000\000\000" # Uses: 3 00:09:25.263 "\306\241\204\221\364\211'\000" # Uses: 0 00:09:25.263 ###### End of recommended dictionary. ###### 00:09:25.263 Done 56 runs in 2 second(s) 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:25.263 12:26:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:09:25.263 [2024-07-15 12:26:20.348893] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:25.263 [2024-07-15 12:26:20.348966] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163647 ] 00:09:25.263 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.521 [2024-07-15 12:26:20.547149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.521 [2024-07-15 12:26:20.620945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.780 [2024-07-15 12:26:20.680874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.780 [2024-07-15 12:26:20.697075] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:09:25.780 INFO: Running with entropic power schedule (0xFF, 100). 00:09:25.780 INFO: Seed: 857331675 00:09:25.780 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:25.780 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:25.780 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:25.780 INFO: A corpus is not provided, starting from an empty corpus 00:09:25.780 #2 INITED exec/s: 0 rss: 65Mb 00:09:25.780 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:25.780 This may also happen if the target rejected all inputs we tried so far 00:09:25.780 [2024-07-15 12:26:20.752483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.780 [2024-07-15 12:26:20.752515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.780 [2024-07-15 12:26:20.752558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.780 [2024-07-15 12:26:20.752575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.780 [2024-07-15 12:26:20.752627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.780 [2024-07-15 12:26:20.752642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.038 NEW_FUNC[1/696]: 0x49a940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:09:26.038 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:26.038 #6 NEW cov: 11969 ft: 11958 corp: 2/79b lim: 105 exec/s: 0 rss: 71Mb L: 78/78 MS: 4 CrossOver-EraseBytes-CrossOver-InsertRepeatedBytes- 00:09:26.038 [2024-07-15 12:26:21.093198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802236360214 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.038 [2024-07-15 12:26:21.093252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.038 #7 NEW cov: 12099 ft: 13142 corp: 3/102b lim: 
105 exec/s: 0 rss: 72Mb L: 23/78 MS: 1 InsertRepeatedBytes- 00:09:26.038 [2024-07-15 12:26:21.133416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.038 [2024-07-15 12:26:21.133444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.038 [2024-07-15 12:26:21.133483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.038 [2024-07-15 12:26:21.133501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.038 [2024-07-15 12:26:21.133575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.038 [2024-07-15 12:26:21.133593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.297 #8 NEW cov: 12105 ft: 13346 corp: 4/180b lim: 105 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 ShuffleBytes- 00:09:26.297 [2024-07-15 12:26:21.183559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.183590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.183633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.183651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.183707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.183726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.297 #9 NEW cov: 12190 ft: 13626 corp: 5/258b lim: 105 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 ChangeBit- 00:09:26.297 [2024-07-15 12:26:21.223417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802249664022 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.223446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.297 #10 NEW cov: 12190 ft: 13737 corp: 6/281b lim: 105 exec/s: 0 rss: 72Mb L: 23/78 MS: 1 ChangeBinInt- 00:09:26.297 [2024-07-15 12:26:21.273531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802249664022 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.273557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.297 #11 NEW cov: 12190 ft: 13828 corp: 7/305b lim: 105 exec/s: 0 rss: 72Mb L: 24/78 MS: 1 CrossOver- 00:09:26.297 [2024-07-15 12:26:21.323901] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.323931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.323969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.323985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.324039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.324055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.297 #12 NEW cov: 12190 ft: 13923 corp: 8/383b lim: 105 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 ChangeASCIIInt- 00:09:26.297 [2024-07-15 12:26:21.374034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382932103893203385 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.374061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.374108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.374123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.374180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.374196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.297 #18 NEW cov: 12190 ft: 13953 corp: 9/462b lim: 105 exec/s: 0 rss: 72Mb L: 79/79 MS: 1 InsertByte- 00:09:26.297 [2024-07-15 12:26:21.414184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.414210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.414257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.414272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.297 [2024-07-15 12:26:21.414326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.297 [2024-07-15 12:26:21.414341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:09:26.556 #19 NEW cov: 12190 ft: 14091 corp: 10/540b lim: 105 exec/s: 0 rss: 72Mb L: 78/79 MS: 1 CrossOver- 00:09:26.556 [2024-07-15 12:26:21.454073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802236360214 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.454101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.556 #20 NEW cov: 12190 ft: 14191 corp: 11/563b lim: 105 exec/s: 0 rss: 72Mb L: 23/79 MS: 1 ChangeByte- 00:09:26.556 [2024-07-15 12:26:21.494571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.494598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.494644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.494659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.494713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.494730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.494784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.494800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.556 #21 NEW cov: 12190 ft: 14707 corp: 12/658b lim: 105 exec/s: 0 rss: 72Mb L: 95/95 MS: 1 CopyPart- 00:09:26.556 [2024-07-15 12:26:21.534552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.534578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.534627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.534643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.534699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.534715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.556 #27 NEW cov: 12190 ft: 14784 corp: 13/736b lim: 105 exec/s: 0 rss: 72Mb L: 78/95 MS: 1 ShuffleBytes- 00:09:26.556 [2024-07-15 12:26:21.574790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.574818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.574873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.574889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.574943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.574959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.575012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.575028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:26.556 #33 NEW cov: 12190 ft: 14807 corp: 14/833b lim: 105 exec/s: 0 rss: 72Mb L: 97/97 MS: 1 CopyPart- 00:09:26.556 [2024-07-15 12:26:21.624814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.624841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.624881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.624898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.556 [2024-07-15 12:26:21.624957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.624974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.556 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:26.556 #34 NEW cov: 12213 ft: 14853 corp: 15/912b lim: 105 exec/s: 0 rss: 73Mb L: 79/97 MS: 1 InsertByte- 00:09:26.556 [2024-07-15 12:26:21.674725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802283218454 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.556 [2024-07-15 12:26:21.674751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.814 #35 NEW cov: 12213 ft: 14895 corp: 16/936b lim: 105 exec/s: 0 rss: 73Mb L: 24/97 MS: 1 ChangeBinInt- 00:09:26.814 [2024-07-15 12:26:21.724981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591543407805797910 len:19533 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:26.814 [2024-07-15 12:26:21.725010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.815 [2024-07-15 12:26:21.725049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.725066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.815 #36 NEW cov: 12213 ft: 15249 corp: 17/986b lim: 105 exec/s: 36 rss: 73Mb L: 50/97 MS: 1 InsertRepeatedBytes- 00:09:26.815 [2024-07-15 12:26:21.765090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591543407805797910 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.765120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.815 [2024-07-15 12:26:21.765172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.765188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.815 #37 NEW cov: 12213 ft: 15278 corp: 18/1036b lim: 105 exec/s: 37 rss: 73Mb L: 50/97 MS: 1 CrossOver- 00:09:26.815 [2024-07-15 12:26:21.815221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.815249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.815 [2024-07-15 12:26:21.815288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.815304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.815 #38 NEW cov: 12213 ft: 15291 corp: 19/1089b lim: 105 exec/s: 38 rss: 73Mb L: 53/97 MS: 1 EraseBytes- 00:09:26.815 [2024-07-15 12:26:21.865274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2744405306843207190 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.865301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.815 #39 NEW cov: 12213 ft: 15301 corp: 20/1113b lim: 105 exec/s: 39 rss: 73Mb L: 24/97 MS: 1 InsertByte- 00:09:26.815 [2024-07-15 12:26:21.905404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802249664022 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.815 [2024-07-15 12:26:21.905433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.815 #40 NEW cov: 12213 ft: 15323 corp: 21/1137b lim: 105 exec/s: 40 rss: 73Mb L: 24/97 MS: 1 ChangeByte- 00:09:27.073 [2024-07-15 12:26:21.945766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:09:27.073 [2024-07-15 12:26:21.945796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.073 [2024-07-15 12:26:21.945837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.073 [2024-07-15 12:26:21.945854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.073 [2024-07-15 12:26:21.945909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975032670905 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.073 [2024-07-15 12:26:21.945926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.073 #41 NEW cov: 12213 ft: 15334 corp: 22/1215b lim: 105 exec/s: 41 rss: 73Mb L: 78/97 MS: 1 CopyPart- 00:09:27.073 [2024-07-15 12:26:21.985723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.073 [2024-07-15 12:26:21.985751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.073 [2024-07-15 12:26:21.985803] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.073 [2024-07-15 12:26:21.985818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.073 #42 NEW cov: 12213 ft: 15357 corp: 23/1268b lim: 105 exec/s: 42 rss: 73Mb L: 53/97 MS: 1 CopyPart- 00:09:27.074 [2024-07-15 12:26:22.035833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591543407805797910 len:19533 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.035861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.074 [2024-07-15 12:26:22.035900] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:5497853135693827148 len:19533 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.035916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.074 #43 NEW cov: 12213 ft: 15395 corp: 24/1318b lim: 105 exec/s: 43 rss: 73Mb L: 50/97 MS: 1 ChangeByte- 00:09:27.074 [2024-07-15 12:26:22.075991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.076019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.074 [2024-07-15 12:26:22.076059] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.076076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.074 #44 NEW 
cov: 12213 ft: 15440 corp: 25/1371b lim: 105 exec/s: 44 rss: 73Mb L: 53/97 MS: 1 ShuffleBytes- 00:09:27.074 [2024-07-15 12:26:22.116113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2744405306843207190 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.116139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.074 [2024-07-15 12:26:22.116178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.116193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.074 #45 NEW cov: 12213 ft: 15458 corp: 26/1429b lim: 105 exec/s: 45 rss: 73Mb L: 58/97 MS: 1 InsertRepeatedBytes- 00:09:27.074 [2024-07-15 12:26:22.166398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382932103893203385 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.166425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.074 [2024-07-15 12:26:22.166465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.166481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.074 [2024-07-15 12:26:22.166536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.074 [2024-07-15 12:26:22.166552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.332 #46 NEW cov: 12213 ft: 15473 corp: 27/1508b lim: 105 exec/s: 46 rss: 73Mb L: 79/97 MS: 1 ChangeByte- 00:09:27.332 [2024-07-15 12:26:22.216557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802249664022 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.216587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.332 [2024-07-15 12:26:22.216642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.216658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.332 [2024-07-15 12:26:22.216713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.216730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.332 #47 NEW cov: 12213 ft: 15510 corp: 28/1573b lim: 105 exec/s: 47 rss: 73Mb L: 65/97 MS: 1 InsertRepeatedBytes- 00:09:27.332 [2024-07-15 12:26:22.256416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:1591483802249664022 len:5696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.256442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.332 #48 NEW cov: 12213 ft: 15567 corp: 29/1598b lim: 105 exec/s: 48 rss: 73Mb L: 25/97 MS: 1 InsertByte- 00:09:27.332 [2024-07-15 12:26:22.306851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:17991 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.306878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.332 [2024-07-15 12:26:22.306924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.306943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.332 [2024-07-15 12:26:22.306997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.307014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.332 #49 NEW cov: 12213 ft: 15593 corp: 30/1676b lim: 105 exec/s: 49 rss: 73Mb L: 78/97 MS: 1 ChangeBinInt- 00:09:27.332 [2024-07-15 12:26:22.346781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.346808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.332 [2024-07-15 12:26:22.346857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13402473389046217145 len:21800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.332 [2024-07-15 12:26:22.346875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.332 #50 NEW cov: 12213 ft: 15601 corp: 31/1729b lim: 105 exec/s: 50 rss: 73Mb L: 53/97 MS: 1 CMP- DE: "\377&\211\365\331U'`"- 00:09:27.332 [2024-07-15 12:26:22.396882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.333 [2024-07-15 12:26:22.396909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.333 [2024-07-15 12:26:22.396948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.333 [2024-07-15 12:26:22.396964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.333 #51 NEW cov: 12213 ft: 15611 corp: 32/1780b lim: 105 exec/s: 51 rss: 73Mb L: 51/97 MS: 1 EraseBytes- 00:09:27.333 [2024-07-15 12:26:22.436895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1591483802236360214 len:5655 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:27.333 [2024-07-15 12:26:22.436921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.333 #52 NEW cov: 12213 ft: 15646 corp: 33/1812b lim: 105 exec/s: 52 rss: 73Mb L: 32/97 MS: 1 InsertRepeatedBytes- 00:09:27.591 [2024-07-15 12:26:22.477281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184342 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.591 [2024-07-15 12:26:22.477310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.591 [2024-07-15 12:26:22.477350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.591 [2024-07-15 12:26:22.477367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.591 [2024-07-15 12:26:22.477421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.591 [2024-07-15 12:26:22.477436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.591 #53 NEW cov: 12213 ft: 15693 corp: 34/1891b lim: 105 exec/s: 53 rss: 73Mb L: 79/97 MS: 1 CrossOver- 00:09:27.591 [2024-07-15 12:26:22.527310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.591 [2024-07-15 12:26:22.527340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.591 [2024-07-15 12:26:22.527396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13402473389046217145 len:21800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.591 [2024-07-15 12:26:22.527412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.591 #54 NEW cov: 12213 ft: 15767 corp: 35/1952b lim: 105 exec/s: 54 rss: 73Mb L: 61/97 MS: 1 PersAutoDict- DE: "\377&\211\365\331U'`"- 00:09:27.591 [2024-07-15 12:26:22.577662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.577690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.577740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.577755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.577806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.577822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:09:27.592 [2024-07-15 12:26:22.577876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.577892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:27.592 #55 NEW cov: 12213 ft: 15789 corp: 36/2047b lim: 105 exec/s: 55 rss: 73Mb L: 95/97 MS: 1 ShuffleBytes- 00:09:27.592 [2024-07-15 12:26:22.627681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2744661561771955734 len:55638 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.627707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.627754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.627771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.627824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.627840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.592 #56 NEW cov: 12213 ft: 15791 corp: 37/2113b lim: 105 exec/s: 56 rss: 74Mb L: 66/97 MS: 1 PersAutoDict- DE: "\377&\211\365\331U'`"- 00:09:27.592 [2024-07-15 12:26:22.677713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446463525266766265 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.677741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.677781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13402473389046217145 len:21800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.677798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.592 #57 NEW cov: 12213 ft: 15817 corp: 38/2166b lim: 105 exec/s: 57 rss: 74Mb L: 53/97 MS: 1 CMP- DE: "\377\377\000\327"- 00:09:27.592 [2024-07-15 12:26:22.717973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.718002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.718045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.592 [2024-07-15 12:26:22.718061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.592 [2024-07-15 12:26:22.718114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13382931975032670905 len:47546 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:27.592 [2024-07-15 12:26:22.718130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.851 #58 NEW cov: 12213 ft: 15822 corp: 39/2247b lim: 105 exec/s: 29 rss: 74Mb L: 81/97 MS: 1 CrossOver- 00:09:27.851 #58 DONE cov: 12213 ft: 15822 corp: 39/2247b lim: 105 exec/s: 29 rss: 74Mb 00:09:27.851 ###### Recommended dictionary. ###### 00:09:27.851 "\377&\211\365\331U'`" # Uses: 2 00:09:27.851 "\377\377\000\327" # Uses: 0 00:09:27.851 ###### End of recommended dictionary. ###### 00:09:27.851 Done 58 runs in 2 second(s) 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:27.851 12:26:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:09:27.852 [2024-07-15 12:26:22.930419] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:27.852 [2024-07-15 12:26:22.930499] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164001 ] 00:09:27.852 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.418 [2024-07-15 12:26:23.243987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.418 [2024-07-15 12:26:23.330912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.418 [2024-07-15 12:26:23.390867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.418 [2024-07-15 12:26:23.407082] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:09:28.418 INFO: Running with entropic power schedule (0xFF, 100). 00:09:28.418 INFO: Seed: 3566333363 00:09:28.418 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:28.418 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:28.418 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:28.418 INFO: A corpus is not provided, starting from an empty corpus 00:09:28.418 #2 INITED exec/s: 0 rss: 64Mb 00:09:28.418 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:28.418 This may also happen if the target rejected all inputs we tried so far 00:09:28.418 [2024-07-15 12:26:23.455412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.418 [2024-07-15 12:26:23.455449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.418 [2024-07-15 12:26:23.455502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.418 [2024-07-15 12:26:23.455520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.676 NEW_FUNC[1/697]: 0x49dcc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:09:28.676 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:28.676 #9 NEW cov: 11990 ft: 11989 corp: 2/56b lim: 120 exec/s: 0 rss: 72Mb L: 55/55 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:28.934 [2024-07-15 12:26:23.816460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.816512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.934 [2024-07-15 12:26:23.816560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.816578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.934 [2024-07-15 12:26:23.816608] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.816625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.934 [2024-07-15 12:26:23.816654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.816670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:28.934 #20 NEW cov: 12120 ft: 12943 corp: 3/162b lim: 120 exec/s: 0 rss: 72Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:09:28.934 [2024-07-15 12:26:23.876301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9727775195120271359 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.876333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.934 [2024-07-15 12:26:23.876387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.876406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.934 #21 NEW cov: 12126 ft: 13150 corp: 4/218b lim: 120 exec/s: 0 rss: 72Mb L: 56/106 MS: 1 InsertByte- 00:09:28.934 [2024-07-15 12:26:23.956541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.956571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.934 [2024-07-15 12:26:23.956605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.934 [2024-07-15 12:26:23.956629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.934 #22 NEW cov: 12211 ft: 13559 corp: 5/273b lim: 120 exec/s: 0 rss: 72Mb L: 55/106 MS: 1 ChangeBit- 00:09:28.934 [2024-07-15 12:26:24.016856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.935 [2024-07-15 12:26:24.016887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.935 [2024-07-15 12:26:24.016920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.935 [2024-07-15 12:26:24.016938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:28.935 [2024-07-15 12:26:24.016968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.935 [2024-07-15 12:26:24.016985] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:28.935 [2024-07-15 12:26:24.017013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.935 [2024-07-15 12:26:24.017030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.193 #28 NEW cov: 12211 ft: 13696 corp: 6/379b lim: 120 exec/s: 0 rss: 72Mb L: 106/106 MS: 1 ShuffleBytes- 00:09:29.193 [2024-07-15 12:26:24.097021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.097050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.193 [2024-07-15 12:26:24.097082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.097099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.193 [2024-07-15 12:26:24.097128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.097159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.193 [2024-07-15 12:26:24.097188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.097205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.193 #29 NEW cov: 12211 ft: 13847 corp: 7/485b lim: 120 exec/s: 0 rss: 72Mb L: 106/106 MS: 1 ChangeBit- 00:09:29.193 [2024-07-15 12:26:24.177258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.177289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.193 [2024-07-15 12:26:24.177323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.177349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.193 [2024-07-15 12:26:24.177379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 [2024-07-15 12:26:24.177396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.193 [2024-07-15 12:26:24.177425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.193 
[2024-07-15 12:26:24.177441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.194 #30 NEW cov: 12211 ft: 13926 corp: 8/591b lim: 120 exec/s: 0 rss: 72Mb L: 106/106 MS: 1 ShuffleBytes- 00:09:29.194 [2024-07-15 12:26:24.257414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.194 [2024-07-15 12:26:24.257446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.194 [2024-07-15 12:26:24.257495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.194 [2024-07-15 12:26:24.257513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.194 [2024-07-15 12:26:24.257551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.194 [2024-07-15 12:26:24.257568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.194 [2024-07-15 12:26:24.257597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.194 [2024-07-15 12:26:24.257614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.194 #31 NEW cov: 12211 ft: 13993 corp: 9/701b lim: 120 exec/s: 0 rss: 73Mb L: 110/110 MS: 1 CopyPart- 00:09:29.194 [2024-07-15 12:26:24.307385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.194 [2024-07-15 12:26:24.307414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.194 [2024-07-15 12:26:24.307464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.194 [2024-07-15 12:26:24.307481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.453 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:29.453 #32 NEW cov: 12234 ft: 14042 corp: 10/756b lim: 120 exec/s: 0 rss: 73Mb L: 55/110 MS: 1 ChangeBit- 00:09:29.453 [2024-07-15 12:26:24.357696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.357730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.357764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.357790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.357835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.357852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.357882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.357899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.453 #33 NEW cov: 12234 ft: 14089 corp: 11/862b lim: 120 exec/s: 0 rss: 73Mb L: 106/110 MS: 1 ChangeBinInt- 00:09:29.453 [2024-07-15 12:26:24.417872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.417903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.417953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.417971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.418002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.418019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.418051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.418069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.453 #34 NEW cov: 12234 ft: 14116 corp: 12/958b lim: 120 exec/s: 34 rss: 73Mb L: 96/110 MS: 1 EraseBytes- 00:09:29.453 [2024-07-15 12:26:24.498047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.498079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.498113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.498140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.498169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.498186] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.498215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.498235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.453 #35 NEW cov: 12234 ft: 14147 corp: 13/1054b lim: 120 exec/s: 35 rss: 73Mb L: 96/110 MS: 1 ShuffleBytes- 00:09:29.453 [2024-07-15 12:26:24.578303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.578335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.578384] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9942764931608281383 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.578414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.578444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.578461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.453 [2024-07-15 12:26:24.578490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.453 [2024-07-15 12:26:24.578506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.711 #36 NEW cov: 12234 ft: 14188 corp: 14/1160b lim: 120 exec/s: 36 rss: 73Mb L: 106/110 MS: 1 CMP- DE: "\001'\211\373\314\006\345\320"- 00:09:29.711 [2024-07-15 12:26:24.638269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.638301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.638350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.638368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.712 #37 NEW cov: 12234 ft: 14269 corp: 15/1215b lim: 120 exec/s: 37 rss: 73Mb L: 55/110 MS: 1 ShuffleBytes- 00:09:29.712 [2024-07-15 12:26:24.718461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.718494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.718553] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.718572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.712 #38 NEW cov: 12234 ft: 14325 corp: 16/1270b lim: 120 exec/s: 38 rss: 73Mb L: 55/110 MS: 1 ChangeBit- 00:09:29.712 [2024-07-15 12:26:24.769628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7451037803703024057 len:26472 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.769657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.769713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.769728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.769786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.769802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.769859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.769875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.712 #39 NEW cov: 12234 ft: 14472 corp: 17/1385b lim: 120 exec/s: 39 rss: 73Mb L: 115/115 MS: 1 InsertRepeatedBytes- 00:09:29.712 [2024-07-15 12:26:24.809704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5497853137529715129 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.809735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.809777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.809794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.809848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.809864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.712 [2024-07-15 12:26:24.809919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.712 [2024-07-15 12:26:24.809937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:09:29.970 #40 NEW cov: 12234 ft: 14523 corp: 18/1499b lim: 120 exec/s: 40 rss: 73Mb L: 114/115 MS: 1 InsertRepeatedBytes- 00:09:29.970 [2024-07-15 12:26:24.859906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931975044184249 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.970 [2024-07-15 12:26:24.859934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.970 [2024-07-15 12:26:24.859979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9942764931608281383 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.970 [2024-07-15 12:26:24.859995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.970 [2024-07-15 12:26:24.860049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.860065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.860119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.860135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.971 #41 NEW cov: 12234 ft: 14553 corp: 19/1605b lim: 120 exec/s: 41 rss: 73Mb L: 106/115 MS: 1 ChangeBit- 00:09:29.971 [2024-07-15 12:26:24.909997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.910023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.910081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.910097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.910150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.910165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.910219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.910234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.971 #42 NEW cov: 12234 ft: 14587 corp: 20/1701b lim: 120 exec/s: 42 rss: 73Mb L: 96/115 MS: 1 ChangeBit- 00:09:29.971 [2024-07-15 12:26:24.960110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 
[2024-07-15 12:26:24.960137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.960186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.960202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.960256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.960272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:24.960326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044182457 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:24.960342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.971 #43 NEW cov: 12234 ft: 14663 corp: 21/1807b lim: 120 exec/s: 43 rss: 73Mb L: 106/115 MS: 1 ChangeBit- 00:09:29.971 [2024-07-15 12:26:25.010154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.010182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.010225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.010240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.010295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.010311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.971 #44 NEW cov: 12234 ft: 15008 corp: 22/1889b lim: 120 exec/s: 44 rss: 73Mb L: 82/115 MS: 1 EraseBytes- 00:09:29.971 [2024-07-15 12:26:25.050383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.050414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.050450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.050467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.050522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.050543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.050596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.050611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:29.971 #45 NEW cov: 12234 ft: 15048 corp: 23/1995b lim: 120 exec/s: 45 rss: 73Mb L: 106/115 MS: 1 PersAutoDict- DE: "\001'\211\373\314\006\345\320"- 00:09:29.971 [2024-07-15 12:26:25.090503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.090534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.090582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:9942647283864109351 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.090599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.090653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.090668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:29.971 [2024-07-15 12:26:25.090724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.971 [2024-07-15 12:26:25.090740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.230 #46 NEW cov: 12234 ft: 15057 corp: 24/2101b lim: 120 exec/s: 46 rss: 73Mb L: 106/115 MS: 1 ChangeByte- 00:09:30.230 [2024-07-15 12:26:25.130621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.130648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.230 [2024-07-15 12:26:25.130695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.130711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.230 [2024-07-15 12:26:25.130764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100594419270 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.130780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.230 [2024-07-15 12:26:25.130833] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931972043952313 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.130852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.230 #47 NEW cov: 12234 ft: 15064 corp: 25/2207b lim: 120 exec/s: 47 rss: 73Mb L: 106/115 MS: 1 PersAutoDict- DE: "\001'\211\373\314\006\345\320"- 00:09:30.230 [2024-07-15 12:26:25.170730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.170757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.230 [2024-07-15 12:26:25.170824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.170840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.230 [2024-07-15 12:26:25.170895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:5063812100602282310 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.170911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.230 [2024-07-15 12:26:25.170965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975351231952 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.230 [2024-07-15 12:26:25.170980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.230 #48 NEW cov: 12234 ft: 15085 corp: 26/2314b lim: 120 exec/s: 48 rss: 73Mb L: 107/115 MS: 1 InsertByte- 00:09:30.230 [2024-07-15 12:26:25.220858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.220885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.220934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.220950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.221004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.221020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.221074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.221090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.231 #49 NEW cov: 
12234 ft: 15092 corp: 27/2433b lim: 120 exec/s: 49 rss: 73Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:09:30.231 [2024-07-15 12:26:25.261073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.261100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.261153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.261171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.261223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.261242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.261294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.261311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.231 #50 NEW cov: 12234 ft: 15106 corp: 28/2536b lim: 120 exec/s: 50 rss: 74Mb L: 103/119 MS: 1 CopyPart- 00:09:30.231 [2024-07-15 12:26:25.310782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.310810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.231 #51 NEW cov: 12234 ft: 15953 corp: 29/2564b lim: 120 exec/s: 51 rss: 74Mb L: 28/119 MS: 1 EraseBytes- 00:09:30.231 [2024-07-15 12:26:25.351352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:5497853137529715129 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.351378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.351424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.351439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.351493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 12:26:25.351508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.231 [2024-07-15 12:26:25.351584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.231 [2024-07-15 
12:26:25.351600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.489 #52 NEW cov: 12234 ft: 15972 corp: 30/2683b lim: 120 exec/s: 52 rss: 74Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:09:30.490 [2024-07-15 12:26:25.401182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.490 [2024-07-15 12:26:25.401209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.490 [2024-07-15 12:26:25.401247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709510911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.490 [2024-07-15 12:26:25.401262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.490 #53 NEW cov: 12234 ft: 15978 corp: 31/2738b lim: 120 exec/s: 53 rss: 74Mb L: 55/119 MS: 1 ChangeByte- 00:09:30.490 [2024-07-15 12:26:25.451666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13382931425288370617 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.490 [2024-07-15 12:26:25.451693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.490 [2024-07-15 12:26:25.451761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.490 [2024-07-15 12:26:25.451778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.490 [2024-07-15 12:26:25.451834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.490 [2024-07-15 12:26:25.451850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.490 [2024-07-15 12:26:25.451904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:30.490 [2024-07-15 12:26:25.451919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:30.490 #54 NEW cov: 12234 ft: 15986 corp: 32/2857b lim: 120 exec/s: 27 rss: 74Mb L: 119/119 MS: 1 ChangeBit- 00:09:30.490 #54 DONE cov: 12234 ft: 15986 corp: 32/2857b lim: 120 exec/s: 27 rss: 74Mb 00:09:30.490 ###### Recommended dictionary. ###### 00:09:30.490 "\001'\211\373\314\006\345\320" # Uses: 2 00:09:30.490 ###### End of recommended dictionary. 
###### 00:09:30.490 Done 54 runs in 2 second(s) 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:30.748 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:30.749 12:26:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:09:30.749 [2024-07-15 12:26:25.672407] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
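The nvmf/run.sh commands above configure and launch SPDK's llvm_nvme_fuzz binary against an NVMe/TCP listener on port 4418 with fuzzer type 18; the run that follows exercises the write-zeroes target named in the NEW_FUNC lines below (fuzz_nvm_write_zeroes_command via TestOneInput). For context only, here is a rough, self-contained sketch of what such a libFuzzer harness can look like. This is not the SPDK implementation: the struct layout, field names, submit_to_target(), and the byte-to-command mapping are all illustrative assumptions.

/*
 * Hypothetical sketch of a libFuzzer-style NVMe command harness.
 * NOT the SPDK llvm_nvme_fuzz code; every name below is illustrative.
 * Build with: clang -fsanitize=fuzzer sketch.c
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified 64-byte NVMe submission-queue entry (illustrative only). */
struct nvme_cmd {
	uint8_t  opc;        /* opcode, e.g. 0x08 = Write Zeroes            */
	uint8_t  flags;
	uint16_t cid;        /* command identifier                          */
	uint32_t nsid;       /* namespace id                                */
	uint64_t rsvd;
	uint64_t mptr;
	uint64_t prp1;
	uint64_t prp2;
	uint64_t slba;       /* starting LBA (cdw10/11)                     */
	uint16_t nlb;        /* number of logical blocks, 0-based (cdw12)   */
	uint16_t ctrl;
	uint32_t cdw13;
	uint32_t cdw14;
	uint32_t cdw15;
};

/* Stand-in for submitting the command and reaping its completion; a real
 * harness would send it over NVMe/TCP and print the command/completion,
 * producing the "WRITE ZEROES ... sqid:1 cid:N" notices seen in this log. */
static int submit_to_target(const struct nvme_cmd *cmd)
{
	return cmd->nsid == 0 ? -1 : 0;   /* placeholder status */
}

/* libFuzzer entry point: map raw fuzz bytes onto the command fields. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct nvme_cmd cmd;

	if (size < sizeof(cmd)) {
		return 0;                 /* too small to be interesting */
	}

	memcpy(&cmd, data, sizeof(cmd));
	cmd.opc = 0x08;                   /* force the Write Zeroes opcode */

	(void)submit_to_target(&cmd);
	return 0;                         /* keep every input in the corpus */
}

Built with clang -fsanitize=fuzzer, libFuzzer supplies main() and repeatedly calls LLVMFuzzerTestOneInput() with mutated inputs; that mutation loop is what produces the "#NN NEW cov" coverage lines and the interleaved WRITE ZEROES command/completion notices in the output that follows.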
00:09:30.749 [2024-07-15 12:26:25.672484] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164360 ] 00:09:30.749 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.007 [2024-07-15 12:26:25.988496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.007 [2024-07-15 12:26:26.077560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.265 [2024-07-15 12:26:26.137072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.265 [2024-07-15 12:26:26.153263] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:09:31.265 INFO: Running with entropic power schedule (0xFF, 100). 00:09:31.265 INFO: Seed: 2017346067 00:09:31.265 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:31.265 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:31.265 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:31.265 INFO: A corpus is not provided, starting from an empty corpus 00:09:31.265 #2 INITED exec/s: 0 rss: 65Mb 00:09:31.265 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:31.265 This may also happen if the target rejected all inputs we tried so far 00:09:31.265 [2024-07-15 12:26:26.202080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.265 [2024-07-15 12:26:26.202112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.265 [2024-07-15 12:26:26.202149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.265 [2024-07-15 12:26:26.202163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.265 [2024-07-15 12:26:26.202215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:31.265 [2024-07-15 12:26:26.202230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.523 NEW_FUNC[1/695]: 0x4a15b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:09:31.523 NEW_FUNC[2/695]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:31.523 #32 NEW cov: 11933 ft: 11934 corp: 2/64b lim: 100 exec/s: 0 rss: 72Mb L: 63/63 MS: 5 ShuffleBytes-InsertByte-CopyPart-CopyPart-InsertRepeatedBytes- 00:09:31.523 [2024-07-15 12:26:26.532780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.523 [2024-07-15 12:26:26.532823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.523 [2024-07-15 12:26:26.532878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.523 [2024-07-15 12:26:26.532893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.523 #33 NEW cov: 12063 ft: 12789 corp: 3/107b lim: 100 exec/s: 0 rss: 72Mb L: 43/63 MS: 1 CrossOver- 00:09:31.523 [2024-07-15 12:26:26.572816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.523 [2024-07-15 12:26:26.572843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.523 [2024-07-15 12:26:26.572883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.523 [2024-07-15 12:26:26.572898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.523 #34 NEW cov: 12069 ft: 13053 corp: 4/148b lim: 100 exec/s: 0 rss: 72Mb L: 41/63 MS: 1 CrossOver- 00:09:31.523 [2024-07-15 12:26:26.622926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.523 [2024-07-15 12:26:26.622953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.523 [2024-07-15 12:26:26.622991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.523 [2024-07-15 12:26:26.623007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.523 #36 NEW cov: 12154 ft: 13483 corp: 5/196b lim: 100 exec/s: 0 rss: 72Mb L: 48/63 MS: 2 CrossOver-CrossOver- 00:09:31.783 [2024-07-15 12:26:26.663080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.783 [2024-07-15 12:26:26.663106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.663144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.783 [2024-07-15 12:26:26.663160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.783 #37 NEW cov: 12154 ft: 13524 corp: 6/252b lim: 100 exec/s: 0 rss: 72Mb L: 56/63 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\002"- 00:09:31.783 [2024-07-15 12:26:26.713305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.783 [2024-07-15 12:26:26.713330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.713377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.783 [2024-07-15 12:26:26.713392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.713446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:31.783 [2024-07-15 12:26:26.713461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.783 #38 NEW cov: 12154 ft: 13594 corp: 7/315b lim: 100 exec/s: 0 rss: 72Mb L: 63/63 MS: 1 ShuffleBytes- 00:09:31.783 [2024-07-15 12:26:26.753279] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.783 [2024-07-15 12:26:26.753304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.753339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.783 [2024-07-15 12:26:26.753354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.783 #39 NEW cov: 12154 ft: 13700 corp: 8/359b lim: 100 exec/s: 0 rss: 72Mb L: 44/63 MS: 1 InsertByte- 00:09:31.783 [2024-07-15 12:26:26.803437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.783 [2024-07-15 12:26:26.803462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.803500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.783 [2024-07-15 12:26:26.803514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.783 #40 NEW cov: 12154 ft: 13835 corp: 9/415b lim: 100 exec/s: 0 rss: 72Mb L: 56/63 MS: 1 ChangeBit- 00:09:31.783 [2024-07-15 12:26:26.853567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.783 [2024-07-15 12:26:26.853593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.853635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.783 [2024-07-15 12:26:26.853650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.783 #41 NEW cov: 12154 ft: 13931 corp: 10/456b lim: 100 exec/s: 0 rss: 72Mb L: 41/63 MS: 1 CopyPart- 00:09:31.783 [2024-07-15 12:26:26.903710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:31.783 [2024-07-15 12:26:26.903736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.783 [2024-07-15 12:26:26.903774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:31.783 [2024-07-15 12:26:26.903792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.042 #42 NEW cov: 12154 ft: 13969 corp: 11/512b lim: 100 exec/s: 0 rss: 72Mb L: 56/63 MS: 1 ShuffleBytes- 00:09:32.042 [2024-07-15 12:26:26.943822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.042 [2024-07-15 12:26:26.943849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.042 [2024-07-15 12:26:26.943885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.042 [2024-07-15 12:26:26.943899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.042 #43 NEW cov: 12154 ft: 14020 corp: 12/561b 
lim: 100 exec/s: 0 rss: 72Mb L: 49/63 MS: 1 InsertByte- 00:09:32.042 [2024-07-15 12:26:26.983822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.042 [2024-07-15 12:26:26.983847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.042 #44 NEW cov: 12154 ft: 14358 corp: 13/596b lim: 100 exec/s: 0 rss: 72Mb L: 35/63 MS: 1 EraseBytes- 00:09:32.042 [2024-07-15 12:26:27.024289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.042 [2024-07-15 12:26:27.024314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.042 [2024-07-15 12:26:27.024370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.042 [2024-07-15 12:26:27.024385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.042 [2024-07-15 12:26:27.024437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:32.042 [2024-07-15 12:26:27.024453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.042 [2024-07-15 12:26:27.024505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:32.042 [2024-07-15 12:26:27.024520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:32.042 #45 NEW cov: 12154 ft: 14672 corp: 14/680b lim: 100 exec/s: 0 rss: 72Mb L: 84/84 MS: 1 CrossOver- 00:09:32.042 [2024-07-15 12:26:27.064145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.042 [2024-07-15 12:26:27.064170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.042 [2024-07-15 12:26:27.064207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.042 [2024-07-15 12:26:27.064219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.042 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:32.042 #51 NEW cov: 12177 ft: 14763 corp: 15/736b lim: 100 exec/s: 0 rss: 73Mb L: 56/84 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\002"- 00:09:32.042 [2024-07-15 12:26:27.114235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.042 [2024-07-15 12:26:27.114260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.042 #52 NEW cov: 12177 ft: 14773 corp: 16/766b lim: 100 exec/s: 0 rss: 73Mb L: 30/84 MS: 1 EraseBytes- 00:09:32.042 [2024-07-15 12:26:27.164364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.042 [2024-07-15 12:26:27.164391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.301 #53 NEW cov: 12177 ft: 14818 corp: 17/795b 
lim: 100 exec/s: 53 rss: 73Mb L: 29/84 MS: 1 EraseBytes- 00:09:32.301 [2024-07-15 12:26:27.204479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.301 [2024-07-15 12:26:27.204507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.301 #54 NEW cov: 12177 ft: 14866 corp: 18/825b lim: 100 exec/s: 54 rss: 73Mb L: 30/84 MS: 1 CopyPart- 00:09:32.301 [2024-07-15 12:26:27.254761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.301 [2024-07-15 12:26:27.254787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.301 [2024-07-15 12:26:27.254826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.301 [2024-07-15 12:26:27.254839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.301 #55 NEW cov: 12177 ft: 14894 corp: 19/866b lim: 100 exec/s: 55 rss: 73Mb L: 41/84 MS: 1 CrossOver- 00:09:32.301 [2024-07-15 12:26:27.294696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.301 [2024-07-15 12:26:27.294721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.301 #59 NEW cov: 12177 ft: 14911 corp: 20/898b lim: 100 exec/s: 59 rss: 73Mb L: 32/84 MS: 4 CrossOver-CrossOver-ChangeByte-CrossOver- 00:09:32.301 [2024-07-15 12:26:27.344995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.301 [2024-07-15 12:26:27.345022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.301 [2024-07-15 12:26:27.345071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.301 [2024-07-15 12:26:27.345086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.301 #60 NEW cov: 12177 ft: 14930 corp: 21/941b lim: 100 exec/s: 60 rss: 73Mb L: 43/84 MS: 1 CopyPart- 00:09:32.301 [2024-07-15 12:26:27.384966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.301 [2024-07-15 12:26:27.384992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.301 #61 NEW cov: 12177 ft: 14947 corp: 22/966b lim: 100 exec/s: 61 rss: 73Mb L: 25/84 MS: 1 EraseBytes- 00:09:32.560 [2024-07-15 12:26:27.435141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.560 [2024-07-15 12:26:27.435167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.560 #62 NEW cov: 12177 ft: 14963 corp: 23/995b lim: 100 exec/s: 62 rss: 73Mb L: 29/84 MS: 1 ChangeBit- 00:09:32.560 [2024-07-15 12:26:27.485289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.560 [2024-07-15 12:26:27.485316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.560 #63 NEW cov: 12177 ft: 14993 corp: 24/1018b lim: 100 exec/s: 63 rss: 73Mb L: 23/84 MS: 1 EraseBytes- 00:09:32.560 [2024-07-15 12:26:27.525512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.560 [2024-07-15 12:26:27.525544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.560 [2024-07-15 12:26:27.525582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.560 [2024-07-15 12:26:27.525597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.560 #64 NEW cov: 12177 ft: 15000 corp: 25/1059b lim: 100 exec/s: 64 rss: 73Mb L: 41/84 MS: 1 CopyPart- 00:09:32.560 [2024-07-15 12:26:27.575537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.560 [2024-07-15 12:26:27.575563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.560 #65 NEW cov: 12177 ft: 15036 corp: 26/1090b lim: 100 exec/s: 65 rss: 73Mb L: 31/84 MS: 1 InsertByte- 00:09:32.560 [2024-07-15 12:26:27.615738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.560 [2024-07-15 12:26:27.615763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.560 [2024-07-15 12:26:27.615800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.560 [2024-07-15 12:26:27.615815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.560 #66 NEW cov: 12177 ft: 15045 corp: 27/1131b lim: 100 exec/s: 66 rss: 73Mb L: 41/84 MS: 1 ChangeBit- 00:09:32.560 [2024-07-15 12:26:27.655771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.560 [2024-07-15 12:26:27.655795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.819 #67 NEW cov: 12177 ft: 15095 corp: 28/1155b lim: 100 exec/s: 67 rss: 73Mb L: 24/84 MS: 1 EraseBytes- 00:09:32.819 [2024-07-15 12:26:27.706201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.819 [2024-07-15 12:26:27.706226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.706282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.819 [2024-07-15 12:26:27.706297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.706348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:32.819 [2024-07-15 12:26:27.706363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.706417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 
nsid:0 00:09:32.819 [2024-07-15 12:26:27.706431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:32.819 #68 NEW cov: 12177 ft: 15113 corp: 29/1239b lim: 100 exec/s: 68 rss: 74Mb L: 84/84 MS: 1 ShuffleBytes- 00:09:32.819 [2024-07-15 12:26:27.756485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.819 [2024-07-15 12:26:27.756511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.756569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.819 [2024-07-15 12:26:27.756584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.756638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:32.819 [2024-07-15 12:26:27.756653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.756706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:32.819 [2024-07-15 12:26:27.756721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.756773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:32.819 [2024-07-15 12:26:27.756788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:32.819 #69 NEW cov: 12177 ft: 15185 corp: 30/1339b lim: 100 exec/s: 69 rss: 74Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:32.819 [2024-07-15 12:26:27.806162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.819 [2024-07-15 12:26:27.806187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.819 #70 NEW cov: 12177 ft: 15212 corp: 31/1368b lim: 100 exec/s: 70 rss: 74Mb L: 29/100 MS: 1 CopyPart- 00:09:32.819 [2024-07-15 12:26:27.856408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.819 [2024-07-15 12:26:27.856433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.819 [2024-07-15 12:26:27.856478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:32.819 [2024-07-15 12:26:27.856493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.820 #71 NEW cov: 12177 ft: 15228 corp: 32/1417b lim: 100 exec/s: 71 rss: 74Mb L: 49/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\002"- 00:09:32.820 [2024-07-15 12:26:27.906419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:32.820 [2024-07-15 12:26:27.906444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.820 #72 NEW cov: 12177 
ft: 15235 corp: 33/1446b lim: 100 exec/s: 72 rss: 74Mb L: 29/100 MS: 1 EraseBytes- 00:09:33.078 [2024-07-15 12:26:27.956992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:33.078 [2024-07-15 12:26:27.957018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:27.957069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:33.078 [2024-07-15 12:26:27.957085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:27.957138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:33.078 [2024-07-15 12:26:27.957153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:27.957205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:33.078 [2024-07-15 12:26:27.957221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:27.957272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:33.078 [2024-07-15 12:26:27.957287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:33.078 #73 NEW cov: 12177 ft: 15267 corp: 34/1546b lim: 100 exec/s: 73 rss: 74Mb L: 100/100 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\002"- 00:09:33.078 [2024-07-15 12:26:28.006999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:33.078 [2024-07-15 12:26:28.007025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:28.007080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:33.078 [2024-07-15 12:26:28.007093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:28.007146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:33.078 [2024-07-15 12:26:28.007162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:28.007218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:33.078 [2024-07-15 12:26:28.007233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:33.078 #74 NEW cov: 12177 ft: 15296 corp: 35/1627b lim: 100 exec/s: 74 rss: 74Mb L: 81/100 MS: 1 EraseBytes- 00:09:33.078 [2024-07-15 12:26:28.046909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:33.078 [2024-07-15 12:26:28.046935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.078 [2024-07-15 12:26:28.046986] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:33.079 [2024-07-15 12:26:28.047000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.079 #75 NEW cov: 12177 ft: 15321 corp: 36/1679b lim: 100 exec/s: 75 rss: 74Mb L: 52/100 MS: 1 CopyPart- 00:09:33.079 [2024-07-15 12:26:28.086933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:33.079 [2024-07-15 12:26:28.086958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.079 #76 NEW cov: 12177 ft: 15392 corp: 37/1703b lim: 100 exec/s: 76 rss: 74Mb L: 24/100 MS: 1 ShuffleBytes- 00:09:33.079 [2024-07-15 12:26:28.137150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:33.079 [2024-07-15 12:26:28.137176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.079 [2024-07-15 12:26:28.137223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:33.079 [2024-07-15 12:26:28.137236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.079 #77 NEW cov: 12177 ft: 15415 corp: 38/1760b lim: 100 exec/s: 77 rss: 74Mb L: 57/100 MS: 1 CMP- DE: "n\260\017\\'\177\000\000"- 00:09:33.079 [2024-07-15 12:26:28.187296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:33.079 [2024-07-15 12:26:28.187321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.079 [2024-07-15 12:26:28.187358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:33.079 [2024-07-15 12:26:28.187373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.338 #78 NEW cov: 12177 ft: 15422 corp: 39/1809b lim: 100 exec/s: 39 rss: 74Mb L: 49/100 MS: 1 EraseBytes- 00:09:33.338 #78 DONE cov: 12177 ft: 15422 corp: 39/1809b lim: 100 exec/s: 39 rss: 74Mb 00:09:33.338 ###### Recommended dictionary. ###### 00:09:33.338 "\000\000\000\000\000\000\000\002" # Uses: 3 00:09:33.338 "n\260\017\\'\177\000\000" # Uses: 0 00:09:33.338 ###### End of recommended dictionary. 
###### 00:09:33.338 Done 78 runs in 2 second(s) 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:33.338 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:33.339 12:26:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:09:33.339 [2024-07-15 12:26:28.385047] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
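This second nvmf/run.sh invocation launches the same binary with "-Z 19" against port 4419, and the NEW_FUNC lines below show it selecting the write-uncorrectable target (fuzz_nvm_write_uncorrectable_command). As a hypothetical sketch of how one binary might dispatch a numeric fuzzer type to a command-specific handler — the table layout and all function names here are assumptions, not the actual SPDK code:

/*
 * Hypothetical dispatch sketch: routing a "-Z <fuzzer_type>" argument
 * (18, 19, ...) to a command-specific fuzz handler.  Illustrative only.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef int (*fuzz_handler)(const uint8_t *data, size_t size);

static int fuzz_write_zeroes(const uint8_t *data, size_t size)
{
	(void)data; (void)size;
	return 0;   /* placeholder: build and submit a Write Zeroes command */
}

static int fuzz_write_uncorrectable(const uint8_t *data, size_t size)
{
	(void)data; (void)size;
	return 0;   /* placeholder: build and submit a Write Uncorrectable command */
}

/* Illustrative table: index = fuzzer type passed via -Z. */
static const fuzz_handler g_handlers[] = {
	[18] = fuzz_write_zeroes,
	[19] = fuzz_write_uncorrectable,
};

int run_one_input(int fuzzer_type, const uint8_t *data, size_t size)
{
	if (fuzzer_type < 0 ||
	    (size_t)fuzzer_type >= sizeof(g_handlers) / sizeof(g_handlers[0]) ||
	    g_handlers[fuzzer_type] == NULL) {
		fprintf(stderr, "unknown fuzzer type %d\n", fuzzer_type);
		return -1;
	}
	return g_handlers[fuzzer_type](data, size);
}

Under a scheme like this, each CI stage only varies the "-Z <n>" argument, the JSON config, and the target port, which matches the pattern visible in the log where consecutive runs reuse the same command line with the fuzzer type and trsvcid incremented.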
00:09:33.339 [2024-07-15 12:26:28.385125] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164722 ] 00:09:33.339 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.597 [2024-07-15 12:26:28.695939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.876 [2024-07-15 12:26:28.785574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.876 [2024-07-15 12:26:28.846137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.876 [2024-07-15 12:26:28.862316] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:09:33.876 INFO: Running with entropic power schedule (0xFF, 100). 00:09:33.876 INFO: Seed: 432378551 00:09:33.876 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:33.876 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:33.876 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:33.876 INFO: A corpus is not provided, starting from an empty corpus 00:09:33.877 #2 INITED exec/s: 0 rss: 65Mb 00:09:33.877 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:33.877 This may also happen if the target rejected all inputs we tried so far 00:09:33.877 [2024-07-15 12:26:28.907481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17980946133155291604 len:15678 00:09:33.877 [2024-07-15 12:26:28.907512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.135 NEW_FUNC[1/694]: 0x4a4570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:09:34.135 NEW_FUNC[2/694]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:34.135 #24 NEW cov: 11896 ft: 11909 corp: 2/14b lim: 50 exec/s: 0 rss: 71Mb L: 13/13 MS: 2 CMP-InsertRepeatedBytes- DE: "\260'\241\324\371\211'\000"- 00:09:34.135 [2024-07-15 12:26:29.238408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12693292009627558356 len:63882 00:09:34.135 [2024-07-15 12:26:29.238457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.393 NEW_FUNC[1/1]: 0x1361070 in nvmf_tcp_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3371 00:09:34.393 #30 NEW cov: 12041 ft: 12343 corp: 3/27b lim: 50 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 PersAutoDict- DE: "\260'\241\324\371\211'\000"- 00:09:34.393 [2024-07-15 12:26:29.298482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2821401626431431049 len:63882 00:09:34.393 [2024-07-15 12:26:29.298512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.393 #31 NEW cov: 12047 ft: 12618 corp: 4/40b lim: 50 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 CrossOver- 00:09:34.393 [2024-07-15 12:26:29.348630] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12693292009627558356 len:63882 00:09:34.393 [2024-07-15 12:26:29.348660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.393 #32 NEW cov: 12132 ft: 13041 corp: 5/53b lim: 50 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 PersAutoDict- DE: "\260'\241\324\371\211'\000"- 00:09:34.393 [2024-07-15 12:26:29.388772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15325793107337881505 len:54522 00:09:34.393 [2024-07-15 12:26:29.388803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.393 #33 NEW cov: 12132 ft: 13169 corp: 6/67b lim: 50 exec/s: 0 rss: 72Mb L: 14/14 MS: 1 InsertByte- 00:09:34.393 [2024-07-15 12:26:29.428838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2955387273 len:1 00:09:34.393 [2024-07-15 12:26:29.428865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.393 #34 NEW cov: 12132 ft: 13236 corp: 7/80b lim: 50 exec/s: 0 rss: 73Mb L: 13/14 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:09:34.393 [2024-07-15 12:26:29.478988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15298965835368966049 len:11015 00:09:34.393 [2024-07-15 12:26:29.479016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.393 #35 NEW cov: 12132 ft: 13273 corp: 8/94b lim: 50 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 ChangeBinInt- 00:09:34.651 [2024-07-15 12:26:29.529137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2955387136 len:1 00:09:34.651 [2024-07-15 12:26:29.529165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.651 #36 NEW cov: 12132 ft: 13333 corp: 9/107b lim: 50 exec/s: 0 rss: 73Mb L: 13/14 MS: 1 CopyPart- 00:09:34.651 [2024-07-15 12:26:29.579517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:09:34.651 [2024-07-15 12:26:29.579547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.651 [2024-07-15 12:26:29.579596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:34.651 [2024-07-15 12:26:29.579613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.651 [2024-07-15 12:26:29.579666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:34.651 [2024-07-15 12:26:29.579682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.651 #38 NEW cov: 12132 ft: 13744 corp: 10/141b lim: 50 exec/s: 0 rss: 73Mb L: 34/34 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:34.651 [2024-07-15 12:26:29.619371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12685129235302949332 len:63882 00:09:34.651 
[2024-07-15 12:26:29.619397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.651 #39 NEW cov: 12132 ft: 13858 corp: 11/154b lim: 50 exec/s: 0 rss: 73Mb L: 13/34 MS: 1 CrossOver- 00:09:34.651 [2024-07-15 12:26:29.669539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15298965835368965982 len:11015 00:09:34.651 [2024-07-15 12:26:29.669566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.651 #40 NEW cov: 12132 ft: 13884 corp: 12/168b lim: 50 exec/s: 0 rss: 73Mb L: 14/34 MS: 1 CopyPart- 00:09:34.651 [2024-07-15 12:26:29.719690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219704685012775 len:9985 00:09:34.651 [2024-07-15 12:26:29.719717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.651 #41 NEW cov: 12132 ft: 13907 corp: 13/179b lim: 50 exec/s: 0 rss: 73Mb L: 11/34 MS: 1 EraseBytes- 00:09:34.651 [2024-07-15 12:26:29.759810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:193681550213120 len:41354 00:09:34.651 [2024-07-15 12:26:29.759836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.908 #42 NEW cov: 12132 ft: 13928 corp: 14/198b lim: 50 exec/s: 0 rss: 73Mb L: 19/34 MS: 1 CrossOver- 00:09:34.908 [2024-07-15 12:26:29.799898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219704685012772 len:9985 00:09:34.908 [2024-07-15 12:26:29.799924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.908 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:34.908 #43 NEW cov: 12155 ft: 13955 corp: 15/209b lim: 50 exec/s: 0 rss: 73Mb L: 11/34 MS: 1 ChangeBinInt- 00:09:34.908 [2024-07-15 12:26:29.850055] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2955387136 len:1 00:09:34.908 [2024-07-15 12:26:29.850084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.908 #44 NEW cov: 12155 ft: 14003 corp: 16/222b lim: 50 exec/s: 0 rss: 73Mb L: 13/34 MS: 1 ChangeBinInt- 00:09:34.908 [2024-07-15 12:26:29.900331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12693292009627558356 len:63882 00:09:34.909 [2024-07-15 12:26:29.900359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.909 [2024-07-15 12:26:29.900398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744070085672959 len:65536 00:09:34.909 [2024-07-15 12:26:29.900414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.909 #45 NEW cov: 12155 ft: 14219 corp: 17/250b lim: 50 exec/s: 45 rss: 73Mb L: 28/34 MS: 1 InsertRepeatedBytes- 00:09:34.909 [2024-07-15 
12:26:29.940288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15318801549120055201 len:24108 00:09:34.909 [2024-07-15 12:26:29.940316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.909 #46 NEW cov: 12155 ft: 14243 corp: 18/265b lim: 50 exec/s: 46 rss: 73Mb L: 15/34 MS: 1 InsertByte- 00:09:34.909 [2024-07-15 12:26:29.980423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219704676099876 len:9985 00:09:34.909 [2024-07-15 12:26:29.980451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.909 #47 NEW cov: 12155 ft: 14283 corp: 19/276b lim: 50 exec/s: 47 rss: 74Mb L: 11/34 MS: 1 CrossOver- 00:09:34.909 [2024-07-15 12:26:30.030601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:193681550213120 len:41354 00:09:34.909 [2024-07-15 12:26:30.030634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.167 #48 NEW cov: 12155 ft: 14318 corp: 20/295b lim: 50 exec/s: 48 rss: 74Mb L: 19/34 MS: 1 ChangeBit- 00:09:35.167 [2024-07-15 12:26:30.080749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17959511480389312980 len:15678 00:09:35.167 [2024-07-15 12:26:30.080785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.167 #49 NEW cov: 12155 ft: 14398 corp: 21/308b lim: 50 exec/s: 49 rss: 74Mb L: 13/34 MS: 1 ShuffleBytes- 00:09:35.167 [2024-07-15 12:26:30.120809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12685129235302949332 len:63882 00:09:35.167 [2024-07-15 12:26:30.120840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.167 #50 NEW cov: 12155 ft: 14413 corp: 22/322b lim: 50 exec/s: 50 rss: 74Mb L: 14/34 MS: 1 CrossOver- 00:09:35.167 [2024-07-15 12:26:30.170998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219640251590436 len:35112 00:09:35.167 [2024-07-15 12:26:30.171028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.167 #51 NEW cov: 12155 ft: 14495 corp: 23/334b lim: 50 exec/s: 51 rss: 74Mb L: 12/34 MS: 1 InsertByte- 00:09:35.167 [2024-07-15 12:26:30.221241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12693292009627558356 len:45058 00:09:35.167 [2024-07-15 12:26:30.221269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.167 [2024-07-15 12:26:30.221326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2855797805306467504 len:35112 00:09:35.167 [2024-07-15 12:26:30.221342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.167 #52 NEW cov: 12155 ft: 14521 corp: 24/360b lim: 50 exec/s: 52 rss: 74Mb L: 26/34 MS: 1 
CrossOver- 00:09:35.167 [2024-07-15 12:26:30.261256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:12685129235302949332 len:63882 00:09:35.167 [2024-07-15 12:26:30.261283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.425 #53 NEW cov: 12155 ft: 14558 corp: 25/374b lim: 50 exec/s: 53 rss: 74Mb L: 14/34 MS: 1 ChangeByte- 00:09:35.425 [2024-07-15 12:26:30.311376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:74487328712139220 len:63882 00:09:35.425 [2024-07-15 12:26:30.311403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.425 #54 NEW cov: 12155 ft: 14615 corp: 26/387b lim: 50 exec/s: 54 rss: 74Mb L: 13/34 MS: 1 CMP- DE: "\001\010"- 00:09:35.425 [2024-07-15 12:26:30.351605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661111707986559271 len:55391 00:09:35.425 [2024-07-15 12:26:30.351632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.425 [2024-07-15 12:26:30.351671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:11661219702696736295 len:10046 00:09:35.425 [2024-07-15 12:26:30.351686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.425 #55 NEW cov: 12155 ft: 14625 corp: 27/412b lim: 50 exec/s: 55 rss: 74Mb L: 25/34 MS: 1 CrossOver- 00:09:35.425 [2024-07-15 12:26:30.391610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219704685012752 len:9985 00:09:35.425 [2024-07-15 12:26:30.391642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.425 #56 NEW cov: 12155 ft: 14680 corp: 28/423b lim: 50 exec/s: 56 rss: 74Mb L: 11/34 MS: 1 ChangeByte- 00:09:35.425 [2024-07-15 12:26:30.431685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45317474205761536 len:1 00:09:35.425 [2024-07-15 12:26:30.431712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.425 #57 NEW cov: 12155 ft: 14687 corp: 29/436b lim: 50 exec/s: 57 rss: 74Mb L: 13/34 MS: 1 ShuffleBytes- 00:09:35.425 [2024-07-15 12:26:30.472019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:138 00:09:35.425 [2024-07-15 12:26:30.472046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.425 [2024-07-15 12:26:30.472083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:655401428 len:1 00:09:35.425 [2024-07-15 12:26:30.472099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.425 [2024-07-15 12:26:30.472154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:35.425 [2024-07-15 12:26:30.472169] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.425 #58 NEW cov: 12155 ft: 14719 corp: 30/475b lim: 50 exec/s: 58 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:09:35.425 [2024-07-15 12:26:30.521939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13765993145871868372 len:63882 00:09:35.425 [2024-07-15 12:26:30.521966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 #59 NEW cov: 12155 ft: 14801 corp: 31/489b lim: 50 exec/s: 59 rss: 74Mb L: 14/39 MS: 1 ChangeByte- 00:09:35.683 [2024-07-15 12:26:30.572118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4445096407613259069 len:54522 00:09:35.683 [2024-07-15 12:26:30.572143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 #60 NEW cov: 12155 ft: 14848 corp: 32/503b lim: 50 exec/s: 60 rss: 74Mb L: 14/39 MS: 1 CrossOver- 00:09:35.683 [2024-07-15 12:26:30.612206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13765993145871868372 len:11 00:09:35.683 [2024-07-15 12:26:30.612233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 #61 NEW cov: 12155 ft: 14859 corp: 33/514b lim: 50 exec/s: 61 rss: 74Mb L: 11/39 MS: 1 EraseBytes- 00:09:35.683 [2024-07-15 12:26:30.662343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219966678017831 len:9985 00:09:35.683 [2024-07-15 12:26:30.662369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 #62 NEW cov: 12155 ft: 14864 corp: 34/525b lim: 50 exec/s: 62 rss: 75Mb L: 11/39 MS: 1 ChangeByte- 00:09:35.683 [2024-07-15 12:26:30.702440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:45317474205761536 len:1 00:09:35.683 [2024-07-15 12:26:30.702467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 #63 NEW cov: 12155 ft: 14868 corp: 35/539b lim: 50 exec/s: 63 rss: 75Mb L: 14/39 MS: 1 InsertByte- 00:09:35.683 [2024-07-15 12:26:30.752612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219704676099876 len:10197 00:09:35.683 [2024-07-15 12:26:30.752639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 #65 NEW cov: 12155 ft: 14872 corp: 36/551b lim: 50 exec/s: 65 rss: 75Mb L: 12/39 MS: 2 EraseBytes-CopyPart- 00:09:35.683 [2024-07-15 12:26:30.793084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:138 00:09:35.683 [2024-07-15 12:26:30.793110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.683 [2024-07-15 12:26:30.793158] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:655401216 len:213 00:09:35.683 [2024-07-15 12:26:30.793174] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.683 [2024-07-15 12:26:30.793226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:09:35.683 [2024-07-15 12:26:30.793241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.683 [2024-07-15 12:26:30.793294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:35.683 [2024-07-15 12:26:30.793310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.942 #66 NEW cov: 12155 ft: 15118 corp: 37/596b lim: 50 exec/s: 66 rss: 75Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:09:35.942 [2024-07-15 12:26:30.842862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:11661219704676230948 len:10197 00:09:35.942 [2024-07-15 12:26:30.842888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.942 #67 NEW cov: 12155 ft: 15138 corp: 38/608b lim: 50 exec/s: 67 rss: 75Mb L: 12/45 MS: 1 ChangeBinInt- 00:09:35.942 [2024-07-15 12:26:30.893372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7451037802321897319 len:26472 00:09:35.942 [2024-07-15 12:26:30.893399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.942 [2024-07-15 12:26:30.893464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:7451037802321897319 len:26472 00:09:35.942 [2024-07-15 12:26:30.893480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.942 [2024-07-15 12:26:30.893543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:11661111707986559271 len:55391 00:09:35.942 [2024-07-15 12:26:30.893560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.942 [2024-07-15 12:26:30.893616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:11661219702696736295 len:10046 00:09:35.942 [2024-07-15 12:26:30.893632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.942 #68 NEW cov: 12155 ft: 15150 corp: 39/653b lim: 50 exec/s: 34 rss: 75Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:09:35.942 #68 DONE cov: 12155 ft: 15150 corp: 39/653b lim: 50 exec/s: 34 rss: 75Mb 00:09:35.942 ###### Recommended dictionary. ###### 00:09:35.942 "\260'\241\324\371\211'\000" # Uses: 2 00:09:35.942 "\000\000\000\000\000\000\000\000" # Uses: 0 00:09:35.942 "\001\010" # Uses: 0 00:09:35.942 ###### End of recommended dictionary. 
###### 00:09:35.942 Done 68 runs in 2 second(s) 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:35.942 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:36.201 12:26:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:09:36.201 [2024-07-15 12:26:31.115997] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:36.201 [2024-07-15 12:26:31.116073] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165077 ] 00:09:36.201 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.460 [2024-07-15 12:26:31.416803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.460 [2024-07-15 12:26:31.497355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.460 [2024-07-15 12:26:31.556916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.460 [2024-07-15 12:26:31.573096] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:36.460 INFO: Running with entropic power schedule (0xFF, 100). 00:09:36.460 INFO: Seed: 3143382522 00:09:36.718 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:36.718 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:36.718 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:36.718 INFO: A corpus is not provided, starting from an empty corpus 00:09:36.718 #2 INITED exec/s: 0 rss: 64Mb 00:09:36.718 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:36.718 This may also happen if the target rejected all inputs we tried so far 00:09:36.718 [2024-07-15 12:26:31.628541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:36.718 [2024-07-15 12:26:31.628573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.718 [2024-07-15 12:26:31.628632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:36.718 [2024-07-15 12:26:31.628646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.976 NEW_FUNC[1/697]: 0x4a6130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:36.976 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:36.976 #4 NEW cov: 11969 ft: 11970 corp: 2/54b lim: 90 exec/s: 0 rss: 72Mb L: 53/53 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:36.976 [2024-07-15 12:26:31.969688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:36.976 [2024-07-15 12:26:31.969745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:31.969812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:36.976 [2024-07-15 12:26:31.969834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:31.969897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:36.976 [2024-07-15 12:26:31.969918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:31.969982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:36.976 [2024-07-15 12:26:31.970002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.976 #5 NEW cov: 12099 ft: 12939 corp: 3/140b lim: 90 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 InsertRepeatedBytes- 00:09:36.976 [2024-07-15 12:26:32.029679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:36.976 [2024-07-15 12:26:32.029713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:32.029751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:36.976 [2024-07-15 12:26:32.029768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:32.029819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:36.976 [2024-07-15 12:26:32.029835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:32.029887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:36.976 [2024-07-15 12:26:32.029903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.976 #6 NEW cov: 12105 ft: 13210 corp: 4/226b lim: 90 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 ChangeByte- 00:09:36.976 [2024-07-15 12:26:32.079805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:36.976 [2024-07-15 12:26:32.079832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.976 [2024-07-15 12:26:32.079874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:36.977 [2024-07-15 12:26:32.079888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.977 [2024-07-15 12:26:32.079939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:36.977 [2024-07-15 12:26:32.079954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.977 [2024-07-15 12:26:32.080006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:36.977 [2024-07-15 12:26:32.080022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.235 #7 NEW cov: 12190 ft: 13507 corp: 5/312b lim: 90 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 CopyPart- 00:09:37.235 [2024-07-15 12:26:32.129603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.235 [2024-07-15 12:26:32.129631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.235 #10 NEW cov: 12190 ft: 14432 corp: 6/342b lim: 90 exec/s: 0 rss: 72Mb L: 30/86 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:09:37.235 [2024-07-15 12:26:32.170086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.235 [2024-07-15 12:26:32.170113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.170158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.235 [2024-07-15 12:26:32.170173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.170224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.235 [2024-07-15 12:26:32.170239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.170291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.235 [2024-07-15 12:26:32.170306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.235 #11 NEW cov: 12190 ft: 14530 corp: 7/430b lim: 90 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 CMP- DE: "\000\000"- 00:09:37.235 [2024-07-15 12:26:32.209895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.235 [2024-07-15 12:26:32.209921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.209963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.235 [2024-07-15 12:26:32.209977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.235 #12 NEW cov: 12190 ft: 14641 corp: 8/483b lim: 90 exec/s: 0 rss: 72Mb L: 53/88 MS: 1 CMP- DE: "\004\000"- 00:09:37.235 [2024-07-15 12:26:32.250176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.235 [2024-07-15 12:26:32.250204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.250275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.235 [2024-07-15 12:26:32.250291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.235 #16 NEW cov: 12199 ft: 14737 corp: 9/534b lim: 90 exec/s: 0 rss: 72Mb L: 51/88 MS: 4 ChangeBit-InsertByte-CopyPart-InsertRepeatedBytes- 00:09:37.235 [2024-07-15 12:26:32.290389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.235 [2024-07-15 12:26:32.290416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.290461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.235 [2024-07-15 12:26:32.290477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.290537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.235 [2024-07-15 12:26:32.290556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.290607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.235 [2024-07-15 12:26:32.290625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.235 #17 NEW cov: 12199 ft: 14804 corp: 10/623b lim: 90 exec/s: 0 rss: 72Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:09:37.235 [2024-07-15 12:26:32.330461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.235 [2024-07-15 12:26:32.330488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.330538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.235 [2024-07-15 12:26:32.330554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.330604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.235 [2024-07-15 12:26:32.330620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.235 [2024-07-15 12:26:32.330671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.235 [2024-07-15 12:26:32.330686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.494 #18 NEW cov: 12199 ft: 14833 corp: 11/712b lim: 90 exec/s: 0 rss: 72Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:09:37.494 [2024-07-15 12:26:32.380644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.494 [2024-07-15 12:26:32.380671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.380713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.494 [2024-07-15 12:26:32.380729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.380780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.494 [2024-07-15 12:26:32.380794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.380846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.494 [2024-07-15 12:26:32.380861] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.494 #19 NEW cov: 12199 ft: 14857 corp: 12/798b lim: 90 exec/s: 0 rss: 73Mb L: 86/89 MS: 1 ChangeBit- 00:09:37.494 [2024-07-15 12:26:32.420741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.494 [2024-07-15 12:26:32.420767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.420814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.494 [2024-07-15 12:26:32.420829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.420882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.494 [2024-07-15 12:26:32.420897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.420950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.494 [2024-07-15 12:26:32.420966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.494 #20 NEW cov: 12199 ft: 14937 corp: 13/884b lim: 90 exec/s: 0 rss: 73Mb L: 86/89 MS: 1 ShuffleBytes- 00:09:37.494 [2024-07-15 12:26:32.460890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.494 [2024-07-15 12:26:32.460918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.460958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.494 [2024-07-15 12:26:32.460974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.461026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.494 [2024-07-15 12:26:32.461041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.461093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.494 [2024-07-15 12:26:32.461107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.494 #21 NEW cov: 12199 ft: 14949 corp: 14/971b lim: 90 exec/s: 0 rss: 73Mb L: 87/89 MS: 1 CrossOver- 00:09:37.494 [2024-07-15 12:26:32.510715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.494 [2024-07-15 12:26:32.510743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.510783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.494 [2024-07-15 12:26:32.510798] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.494 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:37.494 #22 NEW cov: 12222 ft: 14981 corp: 15/1024b lim: 90 exec/s: 0 rss: 73Mb L: 53/89 MS: 1 ChangeByte- 00:09:37.494 [2024-07-15 12:26:32.561162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.494 [2024-07-15 12:26:32.561190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.561233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.494 [2024-07-15 12:26:32.561249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.561303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.494 [2024-07-15 12:26:32.561318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.494 [2024-07-15 12:26:32.561370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.494 [2024-07-15 12:26:32.561386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.495 #23 NEW cov: 12222 ft: 14987 corp: 16/1110b lim: 90 exec/s: 0 rss: 73Mb L: 86/89 MS: 1 CrossOver- 00:09:37.495 [2024-07-15 12:26:32.611460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.495 [2024-07-15 12:26:32.611487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.495 [2024-07-15 12:26:32.611577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.495 [2024-07-15 12:26:32.611604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.495 [2024-07-15 12:26:32.611661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.495 [2024-07-15 12:26:32.611679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.495 [2024-07-15 12:26:32.611732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.495 [2024-07-15 12:26:32.611750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.495 [2024-07-15 12:26:32.611806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:37.495 [2024-07-15 12:26:32.611826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:37.753 #24 NEW cov: 12222 ft: 15067 corp: 17/1200b lim: 90 exec/s: 24 rss: 73Mb L: 90/90 MS: 1 CopyPart- 00:09:37.753 [2024-07-15 12:26:32.661395] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.753 [2024-07-15 12:26:32.661423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.661477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.753 [2024-07-15 12:26:32.661494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.661552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.753 [2024-07-15 12:26:32.661569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.661621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.753 [2024-07-15 12:26:32.661637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.753 #25 NEW cov: 12222 ft: 15070 corp: 18/1286b lim: 90 exec/s: 25 rss: 73Mb L: 86/90 MS: 1 ChangeBinInt- 00:09:37.753 [2024-07-15 12:26:32.701535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.753 [2024-07-15 12:26:32.701563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.701606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.753 [2024-07-15 12:26:32.701622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.701674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.753 [2024-07-15 12:26:32.701689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.701742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.753 [2024-07-15 12:26:32.701758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.753 #26 NEW cov: 12222 ft: 15131 corp: 19/1363b lim: 90 exec/s: 26 rss: 73Mb L: 77/90 MS: 1 CrossOver- 00:09:37.753 [2024-07-15 12:26:32.751395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.753 [2024-07-15 12:26:32.751422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.751464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.753 [2024-07-15 12:26:32.751480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.753 #27 NEW cov: 12222 ft: 15166 corp: 20/1414b lim: 90 exec/s: 27 rss: 73Mb L: 51/90 MS: 1 ChangeBinInt- 00:09:37.753 [2024-07-15 12:26:32.801850] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.753 [2024-07-15 12:26:32.801877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.801925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.753 [2024-07-15 12:26:32.801942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.802011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.753 [2024-07-15 12:26:32.802029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.802081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.753 [2024-07-15 12:26:32.802097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.753 #28 NEW cov: 12222 ft: 15177 corp: 21/1502b lim: 90 exec/s: 28 rss: 73Mb L: 88/90 MS: 1 CopyPart- 00:09:37.753 [2024-07-15 12:26:32.841895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:37.753 [2024-07-15 12:26:32.841922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.753 [2024-07-15 12:26:32.841983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:37.753 [2024-07-15 12:26:32.842000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.754 [2024-07-15 12:26:32.842052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:37.754 [2024-07-15 12:26:32.842068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.754 [2024-07-15 12:26:32.842121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:37.754 [2024-07-15 12:26:32.842137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.754 #29 NEW cov: 12222 ft: 15182 corp: 22/1588b lim: 90 exec/s: 29 rss: 73Mb L: 86/90 MS: 1 CopyPart- 00:09:37.754 [2024-07-15 12:26:32.881722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.012 [2024-07-15 12:26:32.881749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.881806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.012 [2024-07-15 12:26:32.881822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.881876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.012 [2024-07-15 
12:26:32.881892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.012 #33 NEW cov: 12222 ft: 15493 corp: 23/1659b lim: 90 exec/s: 33 rss: 73Mb L: 71/90 MS: 4 CopyPart-ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:09:38.012 [2024-07-15 12:26:32.922126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.012 [2024-07-15 12:26:32.922152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.922213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.012 [2024-07-15 12:26:32.922235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.922284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.012 [2024-07-15 12:26:32.922299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.922351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.012 [2024-07-15 12:26:32.922367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.012 #34 NEW cov: 12222 ft: 15503 corp: 24/1736b lim: 90 exec/s: 34 rss: 73Mb L: 77/90 MS: 1 PersAutoDict- DE: "\004\000"- 00:09:38.012 [2024-07-15 12:26:32.972272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.012 [2024-07-15 12:26:32.972297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.972343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.012 [2024-07-15 12:26:32.972359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.972408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.012 [2024-07-15 12:26:32.972424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:32.972476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.012 [2024-07-15 12:26:32.972490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.012 #35 NEW cov: 12222 ft: 15526 corp: 25/1822b lim: 90 exec/s: 35 rss: 73Mb L: 86/90 MS: 1 ChangeByte- 00:09:38.012 [2024-07-15 12:26:33.012404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.012 [2024-07-15 12:26:33.012430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:33.012479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.012 [2024-07-15 12:26:33.012495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:33.012550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.012 [2024-07-15 12:26:33.012565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.012 [2024-07-15 12:26:33.012617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.012 [2024-07-15 12:26:33.012632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.013 #36 NEW cov: 12222 ft: 15541 corp: 26/1900b lim: 90 exec/s: 36 rss: 73Mb L: 78/90 MS: 1 InsertByte- 00:09:38.013 [2024-07-15 12:26:33.062530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.013 [2024-07-15 12:26:33.062557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.013 [2024-07-15 12:26:33.062624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.013 [2024-07-15 12:26:33.062640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.013 [2024-07-15 12:26:33.062693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.013 [2024-07-15 12:26:33.062710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.013 [2024-07-15 12:26:33.062759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.013 [2024-07-15 12:26:33.062775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.013 #37 NEW cov: 12222 ft: 15550 corp: 27/1988b lim: 90 exec/s: 37 rss: 73Mb L: 88/90 MS: 1 ShuffleBytes- 00:09:38.013 [2024-07-15 12:26:33.112703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.013 [2024-07-15 12:26:33.112729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.013 [2024-07-15 12:26:33.112770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.013 [2024-07-15 12:26:33.112787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.013 [2024-07-15 12:26:33.112838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.013 [2024-07-15 12:26:33.112854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.013 [2024-07-15 12:26:33.112907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.013 [2024-07-15 12:26:33.112922] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.286 #38 NEW cov: 12222 ft: 15570 corp: 28/2077b lim: 90 exec/s: 38 rss: 73Mb L: 89/90 MS: 1 InsertByte- 00:09:38.286 [2024-07-15 12:26:33.162684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.286 [2024-07-15 12:26:33.162710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.162772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.286 [2024-07-15 12:26:33.162788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.162841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.286 [2024-07-15 12:26:33.162857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.286 #39 NEW cov: 12222 ft: 15585 corp: 29/2132b lim: 90 exec/s: 39 rss: 73Mb L: 55/90 MS: 1 PersAutoDict- DE: "\004\000"- 00:09:38.286 [2024-07-15 12:26:33.202964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.286 [2024-07-15 12:26:33.202991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.203036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.286 [2024-07-15 12:26:33.203051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.203104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.286 [2024-07-15 12:26:33.203120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.203174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.286 [2024-07-15 12:26:33.203189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.286 #40 NEW cov: 12222 ft: 15614 corp: 30/2221b lim: 90 exec/s: 40 rss: 73Mb L: 89/90 MS: 1 InsertByte- 00:09:38.286 [2024-07-15 12:26:33.243068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.286 [2024-07-15 12:26:33.243094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.243157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.286 [2024-07-15 12:26:33.243173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.243227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.286 [2024-07-15 12:26:33.243242] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.243295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.286 [2024-07-15 12:26:33.243310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.286 #45 NEW cov: 12222 ft: 15623 corp: 31/2310b lim: 90 exec/s: 45 rss: 74Mb L: 89/90 MS: 5 CrossOver-InsertByte-ChangeByte-PersAutoDict-InsertRepeatedBytes- DE: "\000\000"- 00:09:38.286 [2024-07-15 12:26:33.292914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.286 [2024-07-15 12:26:33.292940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.292991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.286 [2024-07-15 12:26:33.293006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.286 #46 NEW cov: 12222 ft: 15633 corp: 32/2363b lim: 90 exec/s: 46 rss: 74Mb L: 53/90 MS: 1 ChangeBit- 00:09:38.286 [2024-07-15 12:26:33.333059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.286 [2024-07-15 12:26:33.333086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.333142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.286 [2024-07-15 12:26:33.333158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.383142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.286 [2024-07-15 12:26:33.383168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.286 [2024-07-15 12:26:33.383205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.286 [2024-07-15 12:26:33.383220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.286 #48 NEW cov: 12222 ft: 15723 corp: 33/2414b lim: 90 exec/s: 48 rss: 74Mb L: 51/90 MS: 2 ShuffleBytes-ChangeBit- 00:09:38.547 [2024-07-15 12:26:33.423657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.547 [2024-07-15 12:26:33.423685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.423728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.547 [2024-07-15 12:26:33.423743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.423797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.547 [2024-07-15 12:26:33.423811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.423865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.547 [2024-07-15 12:26:33.423881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.547 #49 NEW cov: 12222 ft: 15744 corp: 34/2492b lim: 90 exec/s: 49 rss: 74Mb L: 78/90 MS: 1 ChangeBinInt- 00:09:38.547 [2024-07-15 12:26:33.473881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.547 [2024-07-15 12:26:33.473909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.473955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.547 [2024-07-15 12:26:33.473971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.474024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.547 [2024-07-15 12:26:33.474039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.474093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.547 [2024-07-15 12:26:33.474108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.474160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:09:38.547 [2024-07-15 12:26:33.474175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:38.547 #50 NEW cov: 12222 ft: 15760 corp: 35/2582b lim: 90 exec/s: 50 rss: 74Mb L: 90/90 MS: 1 InsertByte- 00:09:38.547 [2024-07-15 12:26:33.523858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.547 [2024-07-15 12:26:33.523885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.547 [2024-07-15 12:26:33.523934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.547 [2024-07-15 12:26:33.523950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.524001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.548 [2024-07-15 12:26:33.524016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.524067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.548 [2024-07-15 12:26:33.524083] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.548 #51 NEW cov: 12222 ft: 15763 corp: 36/2668b lim: 90 exec/s: 51 rss: 74Mb L: 86/90 MS: 1 ChangeByte- 00:09:38.548 [2024-07-15 12:26:33.564038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.548 [2024-07-15 12:26:33.564064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.564124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.548 [2024-07-15 12:26:33.564140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.564198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.548 [2024-07-15 12:26:33.564214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.564266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.548 [2024-07-15 12:26:33.564282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.548 #52 NEW cov: 12222 ft: 15766 corp: 37/2746b lim: 90 exec/s: 52 rss: 74Mb L: 78/90 MS: 1 CrossOver- 00:09:38.548 [2024-07-15 12:26:33.604112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:38.548 [2024-07-15 12:26:33.604138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.604214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:38.548 [2024-07-15 12:26:33.604245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.604299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:38.548 [2024-07-15 12:26:33.604315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.548 [2024-07-15 12:26:33.604367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:38.548 [2024-07-15 12:26:33.604384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.548 #53 NEW cov: 12222 ft: 15778 corp: 38/2825b lim: 90 exec/s: 26 rss: 74Mb L: 79/90 MS: 1 InsertByte- 00:09:38.548 #53 DONE cov: 12222 ft: 15778 corp: 38/2825b lim: 90 exec/s: 26 rss: 74Mb 00:09:38.548 ###### Recommended dictionary. ###### 00:09:38.548 "\000\000" # Uses: 1 00:09:38.548 "\004\000" # Uses: 2 00:09:38.548 ###### End of recommended dictionary. 
###### 00:09:38.548 Done 53 runs in 2 second(s) 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:38.806 12:26:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:09:38.806 [2024-07-15 12:26:33.810110] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
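Note: the run.sh xtrace interleaved above configures and launches one fuzzer instance per iteration. The following is a rough shell sketch of what those traced commands amount to for iteration 21; variable names, the config redirection, and the suppression-file redirection are assumptions for readability (only the bare commands appear in the trace), so treat it as an illustration rather than the actual run.sh logic.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # checkout path seen in the trace
i=21                                                          # fuzzer_type for this iteration
timen=1                                                       # value passed through as the fuzzer's -t argument
port="44$(printf '%02d' "$i")"                                # 21 -> 4421, the NVMe/TCP listen port for this run
corpus_dir="$rootdir/../corpus/llvm_nvmf_${i}"
nvmf_cfg="/tmp/fuzz_json_${i}.conf"
suppress_file=/var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

rm -rf "/tmp/fuzz_json_$((i - 1)).conf" "$suppress_file"      # clean up the previous iteration's files
mkdir -p "$corpus_dir"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

# Rewrite the template target config so the listener uses this run's port
# (writing the result to $nvmf_cfg is an assumption; the trace only shows the sed itself).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Suppress two known-benign leak reports for LeakSanitizer
# (appending to $suppress_file is likewise an assumption).
echo "leak:spdk_nvmf_qpair_disconnect" >> "$suppress_file"
echo "leak:nvmf_ctrlr_create" >> "$suppress_file"

# Launch the NVMe-oF fuzzer on core mask 0x1 with 512 MB of hugepage memory,
# pointing it at the per-run corpus directory and the rewritten target config.
"$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
    -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$corpus_dir" -Z "$i"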
00:09:38.806 [2024-07-15 12:26:33.810191] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165439 ] 00:09:38.806 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.062 [2024-07-15 12:26:34.118824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.320 [2024-07-15 12:26:34.199767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.320 [2024-07-15 12:26:34.259344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.320 [2024-07-15 12:26:34.275554] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:09:39.320 INFO: Running with entropic power schedule (0xFF, 100). 00:09:39.320 INFO: Seed: 1551427148 00:09:39.320 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:39.320 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:39.320 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:39.320 INFO: A corpus is not provided, starting from an empty corpus 00:09:39.320 #2 INITED exec/s: 0 rss: 65Mb 00:09:39.320 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:39.320 This may also happen if the target rejected all inputs we tried so far 00:09:39.320 [2024-07-15 12:26:34.320339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:39.320 [2024-07-15 12:26:34.320376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.320 [2024-07-15 12:26:34.320429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:39.320 [2024-07-15 12:26:34.320448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.578 NEW_FUNC[1/697]: 0x4a9350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:09:39.578 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:39.578 #8 NEW cov: 11944 ft: 11944 corp: 2/27b lim: 50 exec/s: 0 rss: 71Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:09:39.578 [2024-07-15 12:26:34.671195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:39.578 [2024-07-15 12:26:34.671248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.578 [2024-07-15 12:26:34.671302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:39.578 [2024-07-15 12:26:34.671319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.836 #9 NEW cov: 12074 ft: 12503 corp: 3/53b lim: 50 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeBit- 00:09:39.836 [2024-07-15 12:26:34.751281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:39.836 [2024-07-15 
12:26:34.751314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.751366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:39.836 [2024-07-15 12:26:34.751389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.836 #10 NEW cov: 12080 ft: 12700 corp: 4/79b lim: 50 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ShuffleBytes- 00:09:39.836 [2024-07-15 12:26:34.801417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:39.836 [2024-07-15 12:26:34.801450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.801485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:39.836 [2024-07-15 12:26:34.801503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.836 #11 NEW cov: 12165 ft: 13017 corp: 5/105b lim: 50 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 ChangeByte- 00:09:39.836 [2024-07-15 12:26:34.881728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:39.836 [2024-07-15 12:26:34.881759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.881793] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:39.836 [2024-07-15 12:26:34.881811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.881842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:39.836 [2024-07-15 12:26:34.881859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.836 #12 NEW cov: 12165 ft: 13443 corp: 6/135b lim: 50 exec/s: 0 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:09:39.836 [2024-07-15 12:26:34.941913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:39.836 [2024-07-15 12:26:34.941944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.941978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:39.836 [2024-07-15 12:26:34.941996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.942043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:39.836 [2024-07-15 12:26:34.942060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.836 [2024-07-15 12:26:34.942090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:39.836 [2024-07-15 
12:26:34.942108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.094 #13 NEW cov: 12165 ft: 13855 corp: 7/183b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 CrossOver- 00:09:40.094 [2024-07-15 12:26:35.022010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.094 [2024-07-15 12:26:35.022043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.094 [2024-07-15 12:26:35.022095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.094 [2024-07-15 12:26:35.022117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.094 #14 NEW cov: 12165 ft: 13889 corp: 8/209b lim: 50 exec/s: 0 rss: 72Mb L: 26/48 MS: 1 ShuffleBytes- 00:09:40.094 [2024-07-15 12:26:35.082247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.094 [2024-07-15 12:26:35.082277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.094 [2024-07-15 12:26:35.082330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.094 [2024-07-15 12:26:35.082349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.094 [2024-07-15 12:26:35.082380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.094 [2024-07-15 12:26:35.082396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.094 [2024-07-15 12:26:35.082426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:40.094 [2024-07-15 12:26:35.082442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.094 #15 NEW cov: 12165 ft: 13913 corp: 9/257b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 ChangeByte- 00:09:40.094 [2024-07-15 12:26:35.162348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.094 [2024-07-15 12:26:35.162381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.094 [2024-07-15 12:26:35.162416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.094 [2024-07-15 12:26:35.162435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.094 #18 NEW cov: 12165 ft: 13948 corp: 10/282b lim: 50 exec/s: 0 rss: 72Mb L: 25/48 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:09:40.094 [2024-07-15 12:26:35.212434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.094 [2024-07-15 12:26:35.212464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.094 [2024-07-15 
12:26:35.212514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.094 [2024-07-15 12:26:35.212541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.352 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:40.352 #19 NEW cov: 12182 ft: 14005 corp: 11/308b lim: 50 exec/s: 0 rss: 72Mb L: 26/48 MS: 1 ChangeByte- 00:09:40.352 [2024-07-15 12:26:35.272614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.352 [2024-07-15 12:26:35.272647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.352 #20 NEW cov: 12182 ft: 14795 corp: 12/324b lim: 50 exec/s: 20 rss: 72Mb L: 16/48 MS: 1 EraseBytes- 00:09:40.352 [2024-07-15 12:26:35.342798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.352 [2024-07-15 12:26:35.342829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.352 [2024-07-15 12:26:35.342880] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.352 [2024-07-15 12:26:35.342898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.352 #21 NEW cov: 12182 ft: 14864 corp: 13/350b lim: 50 exec/s: 21 rss: 72Mb L: 26/48 MS: 1 ChangeByte- 00:09:40.352 [2024-07-15 12:26:35.422990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.352 [2024-07-15 12:26:35.423021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.352 [2024-07-15 12:26:35.423055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.352 [2024-07-15 12:26:35.423077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.611 #22 NEW cov: 12182 ft: 14932 corp: 14/376b lim: 50 exec/s: 22 rss: 72Mb L: 26/48 MS: 1 ChangeBit- 00:09:40.611 [2024-07-15 12:26:35.503199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.611 [2024-07-15 12:26:35.503230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.611 [2024-07-15 12:26:35.503281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.611 [2024-07-15 12:26:35.503299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.611 #23 NEW cov: 12182 ft: 14957 corp: 15/402b lim: 50 exec/s: 23 rss: 73Mb L: 26/48 MS: 1 CrossOver- 00:09:40.611 [2024-07-15 12:26:35.583459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.611 [2024-07-15 12:26:35.583490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:40.611 [2024-07-15 12:26:35.583545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.611 [2024-07-15 12:26:35.583564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.611 [2024-07-15 12:26:35.583595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.611 [2024-07-15 12:26:35.583611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.611 #24 NEW cov: 12182 ft: 14997 corp: 16/438b lim: 50 exec/s: 24 rss: 73Mb L: 36/48 MS: 1 CopyPart- 00:09:40.611 [2024-07-15 12:26:35.643621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.611 [2024-07-15 12:26:35.643650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.611 [2024-07-15 12:26:35.643700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.611 [2024-07-15 12:26:35.643718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.611 [2024-07-15 12:26:35.643749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.611 [2024-07-15 12:26:35.643765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.611 #25 NEW cov: 12182 ft: 15056 corp: 17/470b lim: 50 exec/s: 25 rss: 73Mb L: 32/48 MS: 1 InsertRepeatedBytes- 00:09:40.611 [2024-07-15 12:26:35.724486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.611 [2024-07-15 12:26:35.724515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.611 [2024-07-15 12:26:35.724578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.611 [2024-07-15 12:26:35.724595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.870 #26 NEW cov: 12182 ft: 15236 corp: 18/496b lim: 50 exec/s: 26 rss: 73Mb L: 26/48 MS: 1 ChangeBit- 00:09:40.870 [2024-07-15 12:26:35.774911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.870 [2024-07-15 12:26:35.774939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.774987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.870 [2024-07-15 12:26:35.775007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.775076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.870 [2024-07-15 12:26:35.775093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:09:40.870 [2024-07-15 12:26:35.775153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:40.870 [2024-07-15 12:26:35.775173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.870 #27 NEW cov: 12182 ft: 15342 corp: 19/544b lim: 50 exec/s: 27 rss: 73Mb L: 48/48 MS: 1 ShuffleBytes- 00:09:40.870 [2024-07-15 12:26:35.814789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.870 [2024-07-15 12:26:35.814817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.814861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.870 [2024-07-15 12:26:35.814878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.870 #28 NEW cov: 12182 ft: 15405 corp: 20/570b lim: 50 exec/s: 28 rss: 73Mb L: 26/48 MS: 1 CopyPart- 00:09:40.870 [2024-07-15 12:26:35.854879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.870 [2024-07-15 12:26:35.854906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.854961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.870 [2024-07-15 12:26:35.854977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.870 #29 NEW cov: 12182 ft: 15502 corp: 21/596b lim: 50 exec/s: 29 rss: 73Mb L: 26/48 MS: 1 CrossOver- 00:09:40.870 [2024-07-15 12:26:35.905326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.870 [2024-07-15 12:26:35.905353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.905401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.870 [2024-07-15 12:26:35.905418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.905476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.870 [2024-07-15 12:26:35.905492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.905554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:40.870 [2024-07-15 12:26:35.905571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.870 #30 NEW cov: 12182 ft: 15516 corp: 22/644b lim: 50 exec/s: 30 rss: 73Mb L: 48/48 MS: 1 ChangeBit- 00:09:40.870 [2024-07-15 12:26:35.955255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.870 [2024-07-15 12:26:35.955281] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.870 [2024-07-15 12:26:35.955338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.871 [2024-07-15 12:26:35.955354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.871 [2024-07-15 12:26:35.955413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.871 [2024-07-15 12:26:35.955430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.871 #31 NEW cov: 12182 ft: 15552 corp: 23/674b lim: 50 exec/s: 31 rss: 73Mb L: 30/48 MS: 1 ChangeBit- 00:09:40.871 [2024-07-15 12:26:35.995637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:40.871 [2024-07-15 12:26:35.995664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.871 [2024-07-15 12:26:35.995713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:40.871 [2024-07-15 12:26:35.995729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.871 [2024-07-15 12:26:35.995786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:40.871 [2024-07-15 12:26:35.995802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.871 [2024-07-15 12:26:35.995871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:40.871 [2024-07-15 12:26:35.995886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.129 #32 NEW cov: 12182 ft: 15563 corp: 24/722b lim: 50 exec/s: 32 rss: 73Mb L: 48/48 MS: 1 CopyPart- 00:09:41.129 [2024-07-15 12:26:36.035374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.129 [2024-07-15 12:26:36.035400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.035445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:41.129 [2024-07-15 12:26:36.035461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.129 #33 NEW cov: 12182 ft: 15623 corp: 25/748b lim: 50 exec/s: 33 rss: 73Mb L: 26/48 MS: 1 ShuffleBytes- 00:09:41.129 [2024-07-15 12:26:36.075489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.129 [2024-07-15 12:26:36.075516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.075559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:41.129 [2024-07-15 12:26:36.075576] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.129 #34 NEW cov: 12182 ft: 15659 corp: 26/774b lim: 50 exec/s: 34 rss: 73Mb L: 26/48 MS: 1 ChangeByte- 00:09:41.129 [2024-07-15 12:26:36.115907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.129 [2024-07-15 12:26:36.115934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.115987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:41.129 [2024-07-15 12:26:36.116003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.116058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:41.129 [2024-07-15 12:26:36.116074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.116131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:41.129 [2024-07-15 12:26:36.116149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.129 #35 NEW cov: 12182 ft: 15698 corp: 27/822b lim: 50 exec/s: 35 rss: 73Mb L: 48/48 MS: 1 CrossOver- 00:09:41.129 [2024-07-15 12:26:36.166053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.129 [2024-07-15 12:26:36.166078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.166143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:41.129 [2024-07-15 12:26:36.166158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.166213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:41.129 [2024-07-15 12:26:36.166228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.166286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:41.129 [2024-07-15 12:26:36.166303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:41.129 #36 NEW cov: 12182 ft: 15732 corp: 28/868b lim: 50 exec/s: 36 rss: 73Mb L: 46/48 MS: 1 InsertRepeatedBytes- 00:09:41.129 [2024-07-15 12:26:36.205851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.129 [2024-07-15 12:26:36.205877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.129 [2024-07-15 12:26:36.205932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:41.129 [2024-07-15 12:26:36.205949] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.129 #37 NEW cov: 12189 ft: 15750 corp: 29/895b lim: 50 exec/s: 37 rss: 73Mb L: 27/48 MS: 1 InsertByte- 00:09:41.129 [2024-07-15 12:26:36.245828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.129 [2024-07-15 12:26:36.245854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.388 #38 NEW cov: 12189 ft: 15775 corp: 30/913b lim: 50 exec/s: 38 rss: 73Mb L: 18/48 MS: 1 EraseBytes- 00:09:41.388 [2024-07-15 12:26:36.296096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:41.388 [2024-07-15 12:26:36.296123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.388 [2024-07-15 12:26:36.296191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:41.388 [2024-07-15 12:26:36.296206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:41.388 #39 NEW cov: 12189 ft: 15787 corp: 31/940b lim: 50 exec/s: 19 rss: 73Mb L: 27/48 MS: 1 CopyPart- 00:09:41.388 #39 DONE cov: 12189 ft: 15787 corp: 31/940b lim: 50 exec/s: 19 rss: 73Mb 00:09:41.388 Done 39 runs in 2 second(s) 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 
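Note: each fuzzer run ends with a libFuzzer summary of the form "#39 DONE cov: 12189 ft: 15787 corp: 31/940b lim: 50 exec/s: 19 rss: 73Mb", as seen above for run 21. A quick way to tabulate those per-run numbers from a saved copy of this console output is sketched below; the log filename is illustrative, and the field descriptions follow the usual libFuzzer meanings.
# Summarize every "#N DONE ..." line from a captured console log.
# cov = coverage points hit, ft = features, corp = corpus units/total bytes,
# exec/s = executions per second, rss = peak resident memory.
awk '/#[0-9]+ +DONE / {
       for (i = 1; i <= NF; i++) {
         if ($i == "cov:")    cov  = $(i + 1)
         if ($i == "ft:")     ft   = $(i + 1)
         if ($i == "corp:")   corp = $(i + 1)
         if ($i == "exec/s:") eps  = $(i + 1)
         if ($i == "rss:")    rss  = $(i + 1)
       }
       printf "cov=%s ft=%s corp=%s exec/s=%s rss=%s\n", cov, ft, corp, eps, rss
     }' console.log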
00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:41.388 12:26:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:09:41.388 [2024-07-15 12:26:36.498444] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:41.388 [2024-07-15 12:26:36.498535] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165794 ] 00:09:41.645 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.903 [2024-07-15 12:26:36.792258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.903 [2024-07-15 12:26:36.879326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.903 [2024-07-15 12:26:36.938807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.903 [2024-07-15 12:26:36.955003] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:09:41.903 INFO: Running with entropic power schedule (0xFF, 100). 00:09:41.903 INFO: Seed: 4230432419 00:09:41.903 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:41.903 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:41.903 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:41.903 INFO: A corpus is not provided, starting from an empty corpus 00:09:41.903 #2 INITED exec/s: 0 rss: 64Mb 00:09:41.903 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:41.903 This may also happen if the target rejected all inputs we tried so far 00:09:41.903 [2024-07-15 12:26:37.010570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:41.903 [2024-07-15 12:26:37.010604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:41.903 [2024-07-15 12:26:37.010672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:41.903 [2024-07-15 12:26:37.010689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.418 NEW_FUNC[1/697]: 0x4ab610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:09:42.418 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:42.418 #8 NEW cov: 11963 ft: 11969 corp: 2/50b lim: 85 exec/s: 0 rss: 72Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:09:42.418 [2024-07-15 12:26:37.352862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.418 [2024-07-15 12:26:37.352920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.418 [2024-07-15 12:26:37.353023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.418 [2024-07-15 12:26:37.353051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.418 #9 NEW cov: 12100 ft: 12476 corp: 3/99b lim: 85 exec/s: 0 rss: 72Mb L: 49/49 MS: 1 ChangeBit- 00:09:42.418 [2024-07-15 12:26:37.422900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.418 [2024-07-15 12:26:37.422929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.418 [2024-07-15 12:26:37.423031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.418 [2024-07-15 12:26:37.423046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.418 #10 NEW cov: 12106 ft: 12819 corp: 4/149b lim: 85 exec/s: 0 rss: 72Mb L: 50/50 MS: 1 InsertByte- 00:09:42.418 [2024-07-15 12:26:37.473189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.418 [2024-07-15 12:26:37.473216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.418 [2024-07-15 12:26:37.473316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.418 [2024-07-15 12:26:37.473334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.418 #11 NEW cov: 12191 ft: 13113 corp: 5/198b lim: 85 exec/s: 0 rss: 72Mb L: 49/50 MS: 1 CrossOver- 00:09:42.418 [2024-07-15 12:26:37.523338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.418 [2024-07-15 12:26:37.523365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.418 [2024-07-15 12:26:37.523434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.418 [2024-07-15 12:26:37.523453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.418 #13 NEW cov: 12191 ft: 13175 corp: 6/235b lim: 85 exec/s: 0 rss: 72Mb L: 37/50 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:42.675 [2024-07-15 12:26:37.574440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.675 [2024-07-15 12:26:37.574469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.574560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.675 [2024-07-15 12:26:37.574578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.574655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:42.675 [2024-07-15 12:26:37.574673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.574762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:42.675 [2024-07-15 12:26:37.574782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.574869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:09:42.675 [2024-07-15 12:26:37.574890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:42.675 #14 NEW cov: 12191 ft: 13666 corp: 7/320b lim: 85 exec/s: 0 rss: 72Mb L: 85/85 MS: 1 CopyPart- 00:09:42.675 [2024-07-15 12:26:37.634024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.675 [2024-07-15 12:26:37.634052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.634141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.675 [2024-07-15 12:26:37.634159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.634225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:42.675 [2024-07-15 12:26:37.634243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.675 #15 NEW cov: 12191 ft: 13972 corp: 8/387b lim: 85 exec/s: 0 rss: 72Mb L: 67/85 MS: 1 InsertRepeatedBytes- 00:09:42.675 [2024-07-15 12:26:37.684427] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.675 [2024-07-15 12:26:37.684455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.684533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.675 [2024-07-15 12:26:37.684553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.684611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:42.675 [2024-07-15 12:26:37.684627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.684718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:42.675 [2024-07-15 12:26:37.684739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.675 #16 NEW cov: 12191 ft: 13997 corp: 9/463b lim: 85 exec/s: 0 rss: 72Mb L: 76/85 MS: 1 InsertRepeatedBytes- 00:09:42.675 [2024-07-15 12:26:37.754086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.675 [2024-07-15 12:26:37.754114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.675 [2024-07-15 12:26:37.754195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.675 [2024-07-15 12:26:37.754214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.675 #17 NEW cov: 12191 ft: 14055 corp: 10/513b lim: 85 exec/s: 0 rss: 72Mb L: 50/85 MS: 1 CopyPart- 00:09:42.933 [2024-07-15 12:26:37.814431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.933 [2024-07-15 12:26:37.814459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:37.814522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.933 [2024-07-15 12:26:37.814547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.933 #18 NEW cov: 12191 ft: 14115 corp: 11/562b lim: 85 exec/s: 0 rss: 72Mb L: 49/85 MS: 1 ChangeBit- 00:09:42.933 [2024-07-15 12:26:37.874620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.933 [2024-07-15 12:26:37.874652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:37.874725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.933 [2024-07-15 12:26:37.874744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.933 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:42.933 #19 NEW cov: 12214 ft: 14162 corp: 12/611b lim: 85 exec/s: 0 rss: 72Mb L: 49/85 MS: 1 ChangeBinInt- 00:09:42.933 [2024-07-15 12:26:37.924834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.933 [2024-07-15 12:26:37.924863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:37.924952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.933 [2024-07-15 12:26:37.924972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.933 #20 NEW cov: 12214 ft: 14198 corp: 13/649b lim: 85 exec/s: 0 rss: 72Mb L: 38/85 MS: 1 InsertByte- 00:09:42.933 [2024-07-15 12:26:37.975657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.933 [2024-07-15 12:26:37.975687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:37.975756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.933 [2024-07-15 12:26:37.975772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:37.975842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:42.933 [2024-07-15 12:26:37.975864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:37.975948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:42.933 [2024-07-15 12:26:37.975968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.933 #30 NEW cov: 12214 ft: 14222 corp: 14/717b lim: 85 exec/s: 30 rss: 73Mb L: 68/85 MS: 5 CrossOver-ChangeBit-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:09:42.933 [2024-07-15 12:26:38.025186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:42.933 [2024-07-15 12:26:38.025214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.933 [2024-07-15 12:26:38.025277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:42.933 [2024-07-15 12:26:38.025294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.933 #31 NEW cov: 12214 ft: 14266 corp: 15/766b lim: 85 exec/s: 31 rss: 73Mb L: 49/85 MS: 1 ShuffleBytes- 00:09:43.190 [2024-07-15 12:26:38.075834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.190 [2024-07-15 12:26:38.075863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.075932] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.190 [2024-07-15 12:26:38.075951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.076009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.190 [2024-07-15 12:26:38.076030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.190 #32 NEW cov: 12214 ft: 14301 corp: 16/820b lim: 85 exec/s: 32 rss: 73Mb L: 54/85 MS: 1 InsertRepeatedBytes- 00:09:43.190 [2024-07-15 12:26:38.136593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.190 [2024-07-15 12:26:38.136620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.136701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.190 [2024-07-15 12:26:38.136719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.136800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.190 [2024-07-15 12:26:38.136817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.136898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:43.190 [2024-07-15 12:26:38.136916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.137008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:09:43.190 [2024-07-15 12:26:38.137029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:43.190 #33 NEW cov: 12214 ft: 14321 corp: 17/905b lim: 85 exec/s: 33 rss: 73Mb L: 85/85 MS: 1 ChangeByte- 00:09:43.190 [2024-07-15 12:26:38.195870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.190 [2024-07-15 12:26:38.195897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.195962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.190 [2024-07-15 12:26:38.195991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.190 #34 NEW cov: 12214 ft: 14360 corp: 18/954b lim: 85 exec/s: 34 rss: 73Mb L: 49/85 MS: 1 ChangeByte- 00:09:43.190 [2024-07-15 12:26:38.256012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.190 [2024-07-15 12:26:38.256039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.256099] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.190 [2024-07-15 12:26:38.256115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.190 #35 NEW cov: 12214 ft: 14441 corp: 19/1003b lim: 85 exec/s: 35 rss: 73Mb L: 49/85 MS: 1 ChangeASCIIInt- 00:09:43.190 [2024-07-15 12:26:38.307290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.190 [2024-07-15 12:26:38.307318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.307407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.190 [2024-07-15 12:26:38.307426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.307492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.190 [2024-07-15 12:26:38.307511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.307608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:43.190 [2024-07-15 12:26:38.307628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.190 [2024-07-15 12:26:38.307710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:09:43.190 [2024-07-15 12:26:38.307730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:43.448 #36 NEW cov: 12214 ft: 14496 corp: 20/1088b lim: 85 exec/s: 36 rss: 73Mb L: 85/85 MS: 1 ChangeBinInt- 00:09:43.448 [2024-07-15 12:26:38.366471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.448 [2024-07-15 12:26:38.366498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.448 [2024-07-15 12:26:38.366591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.448 [2024-07-15 12:26:38.366607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.448 #37 NEW cov: 12214 ft: 14548 corp: 21/1138b lim: 85 exec/s: 37 rss: 73Mb L: 50/85 MS: 1 InsertByte- 00:09:43.448 [2024-07-15 12:26:38.416367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.448 [2024-07-15 12:26:38.416395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.448 #38 NEW cov: 12214 ft: 15326 corp: 22/1163b lim: 85 exec/s: 38 rss: 73Mb L: 25/85 MS: 1 EraseBytes- 00:09:43.448 [2024-07-15 12:26:38.466600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.448 [2024-07-15 12:26:38.466627] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.448 #39 NEW cov: 12214 ft: 15359 corp: 23/1196b lim: 85 exec/s: 39 rss: 73Mb L: 33/85 MS: 1 EraseBytes- 00:09:43.448 [2024-07-15 12:26:38.527113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.448 [2024-07-15 12:26:38.527142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.448 [2024-07-15 12:26:38.527207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.448 [2024-07-15 12:26:38.527227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.448 #40 NEW cov: 12214 ft: 15381 corp: 24/1245b lim: 85 exec/s: 40 rss: 73Mb L: 49/85 MS: 1 CopyPart- 00:09:43.705 [2024-07-15 12:26:38.587606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.705 [2024-07-15 12:26:38.587636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.587702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.705 [2024-07-15 12:26:38.587722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.587784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.705 [2024-07-15 12:26:38.587802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.705 #45 NEW cov: 12214 ft: 15399 corp: 25/1304b lim: 85 exec/s: 45 rss: 73Mb L: 59/85 MS: 5 CopyPart-InsertByte-ChangeBit-CopyPart-InsertRepeatedBytes- 00:09:43.705 [2024-07-15 12:26:38.637784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.705 [2024-07-15 12:26:38.637815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.637875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.705 [2024-07-15 12:26:38.637894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.637956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.705 [2024-07-15 12:26:38.637975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.705 #46 NEW cov: 12214 ft: 15410 corp: 26/1362b lim: 85 exec/s: 46 rss: 73Mb L: 58/85 MS: 1 InsertRepeatedBytes- 00:09:43.705 [2024-07-15 12:26:38.687832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.705 [2024-07-15 12:26:38.687858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.687936] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.705 [2024-07-15 12:26:38.687954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.688036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.705 [2024-07-15 12:26:38.688051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.705 #47 NEW cov: 12214 ft: 15436 corp: 27/1416b lim: 85 exec/s: 47 rss: 73Mb L: 54/85 MS: 1 CMP- DE: "\377\377\0001"- 00:09:43.705 [2024-07-15 12:26:38.738460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.705 [2024-07-15 12:26:38.738488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.738573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.705 [2024-07-15 12:26:38.738591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.738669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.705 [2024-07-15 12:26:38.738686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.738769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:43.705 [2024-07-15 12:26:38.738788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.705 #48 NEW cov: 12214 ft: 15460 corp: 28/1487b lim: 85 exec/s: 48 rss: 73Mb L: 71/85 MS: 1 InsertRepeatedBytes- 00:09:43.705 [2024-07-15 12:26:38.787925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.705 [2024-07-15 12:26:38.787952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.705 [2024-07-15 12:26:38.788058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.705 [2024-07-15 12:26:38.788077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.705 #49 NEW cov: 12214 ft: 15479 corp: 29/1536b lim: 85 exec/s: 49 rss: 73Mb L: 49/85 MS: 1 ChangeBinInt- 00:09:43.963 [2024-07-15 12:26:38.838230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.963 [2024-07-15 12:26:38.838257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.963 [2024-07-15 12:26:38.838354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.963 [2024-07-15 12:26:38.838373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.963 #50 NEW cov: 
12214 ft: 15520 corp: 30/1586b lim: 85 exec/s: 50 rss: 73Mb L: 50/85 MS: 1 InsertByte- 00:09:43.963 [2024-07-15 12:26:38.888292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.963 [2024-07-15 12:26:38.888319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.963 [2024-07-15 12:26:38.888405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.963 [2024-07-15 12:26:38.888422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.963 #51 NEW cov: 12214 ft: 15533 corp: 31/1635b lim: 85 exec/s: 51 rss: 73Mb L: 49/85 MS: 1 ChangeBit- 00:09:43.963 [2024-07-15 12:26:38.939157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.963 [2024-07-15 12:26:38.939184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.963 [2024-07-15 12:26:38.939253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.963 [2024-07-15 12:26:38.939273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.963 [2024-07-15 12:26:38.939325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:43.963 [2024-07-15 12:26:38.939343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.964 [2024-07-15 12:26:38.939430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:43.964 [2024-07-15 12:26:38.939449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.964 #52 NEW cov: 12214 ft: 15546 corp: 32/1711b lim: 85 exec/s: 52 rss: 73Mb L: 76/85 MS: 1 ChangeBinInt- 00:09:43.964 [2024-07-15 12:26:38.998726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:43.964 [2024-07-15 12:26:38.998753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.964 [2024-07-15 12:26:38.998858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:43.964 [2024-07-15 12:26:38.998875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.964 #53 NEW cov: 12214 ft: 15550 corp: 33/1760b lim: 85 exec/s: 26 rss: 73Mb L: 49/85 MS: 1 ShuffleBytes- 00:09:43.964 #53 DONE cov: 12214 ft: 15550 corp: 33/1760b lim: 85 exec/s: 26 rss: 73Mb 00:09:43.964 ###### Recommended dictionary. ###### 00:09:43.964 "\377\377\0001" # Uses: 0 00:09:43.964 ###### End of recommended dictionary. 
###### 00:09:43.964 Done 53 runs in 2 second(s) 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:44.222 12:26:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:44.222 [2024-07-15 12:26:39.196301] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:44.222 [2024-07-15 12:26:39.196384] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166193 ] 00:09:44.222 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.480 [2024-07-15 12:26:39.486961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.480 [2024-07-15 12:26:39.582378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.738 [2024-07-15 12:26:39.642384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.738 [2024-07-15 12:26:39.658589] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:44.738 INFO: Running with entropic power schedule (0xFF, 100). 00:09:44.738 INFO: Seed: 2639458018 00:09:44.738 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:44.738 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:44.738 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:44.738 INFO: A corpus is not provided, starting from an empty corpus 00:09:44.738 #2 INITED exec/s: 0 rss: 65Mb 00:09:44.738 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:44.738 This may also happen if the target rejected all inputs we tried so far 00:09:44.738 [2024-07-15 12:26:39.703444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:44.739 [2024-07-15 12:26:39.703482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.739 [2024-07-15 12:26:39.703540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:44.739 [2024-07-15 12:26:39.703560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.739 [2024-07-15 12:26:39.703591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:44.739 [2024-07-15 12:26:39.703608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.739 [2024-07-15 12:26:39.703637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:44.739 [2024-07-15 12:26:39.703658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:44.997 NEW_FUNC[1/696]: 0x4ae840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:44.997 NEW_FUNC[2/696]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:44.997 #8 NEW cov: 11902 ft: 11901 corp: 2/22b lim: 25 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:09:44.997 [2024-07-15 12:26:40.086716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:44.997 [2024-07-15 12:26:40.086768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:44.997 [2024-07-15 12:26:40.086849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:44.997 [2024-07-15 12:26:40.086870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.997 [2024-07-15 12:26:40.086952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:44.997 [2024-07-15 12:26:40.086972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.997 [2024-07-15 12:26:40.087057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:44.997 [2024-07-15 12:26:40.087077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:44.997 #14 NEW cov: 12033 ft: 12496 corp: 3/43b lim: 25 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 ShuffleBytes- 00:09:45.255 [2024-07-15 12:26:40.156869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.255 [2024-07-15 12:26:40.156900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.255 [2024-07-15 12:26:40.156962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.255 [2024-07-15 12:26:40.156979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.255 [2024-07-15 12:26:40.157050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.255 [2024-07-15 12:26:40.157067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.157152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.256 [2024-07-15 12:26:40.157169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.256 #15 NEW cov: 12039 ft: 12827 corp: 4/65b lim: 25 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 InsertByte- 00:09:45.256 [2024-07-15 12:26:40.207002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.256 [2024-07-15 12:26:40.207033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.207118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.256 [2024-07-15 12:26:40.207138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.207198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.256 [2024-07-15 12:26:40.207218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.207303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.256 [2024-07-15 12:26:40.207325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.256 #16 NEW cov: 12124 ft: 13207 corp: 5/88b lim: 25 exec/s: 0 rss: 71Mb L: 23/23 MS: 1 CopyPart- 00:09:45.256 [2024-07-15 12:26:40.267251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.256 [2024-07-15 12:26:40.267280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.267358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.256 [2024-07-15 12:26:40.267376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.267447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.256 [2024-07-15 12:26:40.267463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.267555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.256 [2024-07-15 12:26:40.267579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.256 #17 NEW cov: 12124 ft: 13325 corp: 6/112b lim: 25 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertByte- 00:09:45.256 [2024-07-15 12:26:40.327443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.256 [2024-07-15 12:26:40.327474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.327544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.256 [2024-07-15 12:26:40.327567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.327626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.256 [2024-07-15 12:26:40.327645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.327731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.256 [2024-07-15 12:26:40.327747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.256 #18 NEW cov: 12124 ft: 13373 corp: 7/133b lim: 25 exec/s: 0 rss: 72Mb L: 21/24 MS: 1 ChangeByte- 00:09:45.256 [2024-07-15 12:26:40.377650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.256 [2024-07-15 12:26:40.377682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.377757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 
nsid:0 00:09:45.256 [2024-07-15 12:26:40.377789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.377855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.256 [2024-07-15 12:26:40.377874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.256 [2024-07-15 12:26:40.377968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.256 [2024-07-15 12:26:40.377991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.514 #19 NEW cov: 12124 ft: 13454 corp: 8/155b lim: 25 exec/s: 0 rss: 72Mb L: 22/24 MS: 1 CrossOver- 00:09:45.514 [2024-07-15 12:26:40.448036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.515 [2024-07-15 12:26:40.448066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.448129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.515 [2024-07-15 12:26:40.448149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.448199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.515 [2024-07-15 12:26:40.448221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.448309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.515 [2024-07-15 12:26:40.448331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.515 #20 NEW cov: 12124 ft: 13533 corp: 9/177b lim: 25 exec/s: 0 rss: 72Mb L: 22/24 MS: 1 ChangeBinInt- 00:09:45.515 [2024-07-15 12:26:40.507973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.515 [2024-07-15 12:26:40.508004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.508067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.515 [2024-07-15 12:26:40.508086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.508149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.515 [2024-07-15 12:26:40.508167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.515 #21 NEW cov: 12124 ft: 14034 corp: 10/194b lim: 25 exec/s: 0 rss: 72Mb L: 17/24 MS: 1 InsertRepeatedBytes- 00:09:45.515 [2024-07-15 12:26:40.557910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 
00:09:45.515 [2024-07-15 12:26:40.557940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.558011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.515 [2024-07-15 12:26:40.558032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.515 #22 NEW cov: 12124 ft: 14329 corp: 11/207b lim: 25 exec/s: 0 rss: 72Mb L: 13/24 MS: 1 InsertRepeatedBytes- 00:09:45.515 [2024-07-15 12:26:40.608573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.515 [2024-07-15 12:26:40.608603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.608700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.515 [2024-07-15 12:26:40.608722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.608801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.515 [2024-07-15 12:26:40.608823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.515 [2024-07-15 12:26:40.608912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.515 [2024-07-15 12:26:40.608932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.515 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:45.515 #23 NEW cov: 12147 ft: 14437 corp: 12/230b lim: 25 exec/s: 0 rss: 72Mb L: 23/24 MS: 1 ChangeBinInt- 00:09:45.774 [2024-07-15 12:26:40.658755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.774 [2024-07-15 12:26:40.658789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.658865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.774 [2024-07-15 12:26:40.658885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.658941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.774 [2024-07-15 12:26:40.658959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.659046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.774 [2024-07-15 12:26:40.659070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.774 #24 NEW cov: 12147 ft: 14452 corp: 13/254b lim: 25 exec/s: 24 rss: 72Mb L: 24/24 MS: 1 CopyPart- 00:09:45.774 [2024-07-15 
12:26:40.728304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.774 [2024-07-15 12:26:40.728336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.774 #25 NEW cov: 12147 ft: 14843 corp: 14/262b lim: 25 exec/s: 25 rss: 72Mb L: 8/24 MS: 1 EraseBytes- 00:09:45.774 [2024-07-15 12:26:40.798705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.774 [2024-07-15 12:26:40.798734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.798794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.774 [2024-07-15 12:26:40.798813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.774 #26 NEW cov: 12147 ft: 14886 corp: 15/275b lim: 25 exec/s: 26 rss: 72Mb L: 13/24 MS: 1 EraseBytes- 00:09:45.774 [2024-07-15 12:26:40.859503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:45.774 [2024-07-15 12:26:40.859539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.859644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:45.774 [2024-07-15 12:26:40.859662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.859730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:45.774 [2024-07-15 12:26:40.859752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.774 [2024-07-15 12:26:40.859841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:45.774 [2024-07-15 12:26:40.859865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:45.774 #27 NEW cov: 12147 ft: 14913 corp: 16/299b lim: 25 exec/s: 27 rss: 72Mb L: 24/24 MS: 1 ChangeBinInt- 00:09:46.033 [2024-07-15 12:26:40.909706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.033 [2024-07-15 12:26:40.909741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:40.909802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.033 [2024-07-15 12:26:40.909835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:40.909882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.033 [2024-07-15 12:26:40.909902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:40.909988] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.033 [2024-07-15 12:26:40.910005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.033 #28 NEW cov: 12147 ft: 14930 corp: 17/320b lim: 25 exec/s: 28 rss: 72Mb L: 21/24 MS: 1 ChangeBit- 00:09:46.033 [2024-07-15 12:26:40.959820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.033 [2024-07-15 12:26:40.959849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:40.959921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.033 [2024-07-15 12:26:40.959940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:40.960008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.033 [2024-07-15 12:26:40.960026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:40.960115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.033 [2024-07-15 12:26:40.960132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.033 #29 NEW cov: 12147 ft: 14955 corp: 18/341b lim: 25 exec/s: 29 rss: 72Mb L: 21/24 MS: 1 ShuffleBytes- 00:09:46.033 [2024-07-15 12:26:41.019580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.033 [2024-07-15 12:26:41.019609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:41.019669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.033 [2024-07-15 12:26:41.019687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.033 #30 NEW cov: 12147 ft: 14981 corp: 19/354b lim: 25 exec/s: 30 rss: 72Mb L: 13/24 MS: 1 ShuffleBytes- 00:09:46.033 [2024-07-15 12:26:41.080247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.033 [2024-07-15 12:26:41.080275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:41.080354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.033 [2024-07-15 12:26:41.080374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:41.080456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.033 [2024-07-15 12:26:41.080475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:41.080569] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.033 [2024-07-15 12:26:41.080594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.033 #31 NEW cov: 12147 ft: 14995 corp: 20/375b lim: 25 exec/s: 31 rss: 72Mb L: 21/24 MS: 1 EraseBytes- 00:09:46.033 [2024-07-15 12:26:41.140404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.033 [2024-07-15 12:26:41.140435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.033 [2024-07-15 12:26:41.140506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.034 [2024-07-15 12:26:41.140524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.034 [2024-07-15 12:26:41.140623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.034 [2024-07-15 12:26:41.140643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.034 [2024-07-15 12:26:41.140720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.034 [2024-07-15 12:26:41.140738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.343 #32 NEW cov: 12147 ft: 15029 corp: 21/399b lim: 25 exec/s: 32 rss: 72Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:09:46.343 [2024-07-15 12:26:41.190539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.343 [2024-07-15 12:26:41.190567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.343 [2024-07-15 12:26:41.190657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.343 [2024-07-15 12:26:41.190677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.343 [2024-07-15 12:26:41.190761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.343 [2024-07-15 12:26:41.190786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.343 [2024-07-15 12:26:41.190871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.343 [2024-07-15 12:26:41.190890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.343 #33 NEW cov: 12147 ft: 15051 corp: 22/420b lim: 25 exec/s: 33 rss: 73Mb L: 21/24 MS: 1 ChangeBinInt- 00:09:46.343 [2024-07-15 12:26:41.250225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.343 [2024-07-15 12:26:41.250256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.343 #34 NEW cov: 12147 ft: 15159 corp: 23/428b lim: 25 
exec/s: 34 rss: 73Mb L: 8/24 MS: 1 ShuffleBytes- 00:09:46.343 [2024-07-15 12:26:41.310533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.344 [2024-07-15 12:26:41.310561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.344 [2024-07-15 12:26:41.310638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.344 [2024-07-15 12:26:41.310656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.344 #35 NEW cov: 12147 ft: 15168 corp: 24/439b lim: 25 exec/s: 35 rss: 73Mb L: 11/24 MS: 1 EraseBytes- 00:09:46.344 [2024-07-15 12:26:41.370661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.344 [2024-07-15 12:26:41.370690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.344 [2024-07-15 12:26:41.370767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.344 [2024-07-15 12:26:41.370783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.344 #36 NEW cov: 12147 ft: 15212 corp: 25/451b lim: 25 exec/s: 36 rss: 73Mb L: 12/24 MS: 1 EraseBytes- 00:09:46.344 [2024-07-15 12:26:41.420640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.344 [2024-07-15 12:26:41.420670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.614 #37 NEW cov: 12147 ft: 15234 corp: 26/457b lim: 25 exec/s: 37 rss: 73Mb L: 6/24 MS: 1 EraseBytes- 00:09:46.614 [2024-07-15 12:26:41.470999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.614 [2024-07-15 12:26:41.471028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.471128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.614 [2024-07-15 12:26:41.471149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.614 #38 NEW cov: 12147 ft: 15245 corp: 27/468b lim: 25 exec/s: 38 rss: 73Mb L: 11/24 MS: 1 ChangeByte- 00:09:46.614 [2024-07-15 12:26:41.530950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.614 [2024-07-15 12:26:41.530980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.614 #39 NEW cov: 12147 ft: 15256 corp: 28/477b lim: 25 exec/s: 39 rss: 73Mb L: 9/24 MS: 1 CrossOver- 00:09:46.614 [2024-07-15 12:26:41.581941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.614 [2024-07-15 12:26:41.581971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.582045] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.614 [2024-07-15 12:26:41.582064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.582141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.614 [2024-07-15 12:26:41.582163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.582250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.614 [2024-07-15 12:26:41.582268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.614 #40 NEW cov: 12147 ft: 15267 corp: 29/501b lim: 25 exec/s: 40 rss: 73Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:09:46.614 [2024-07-15 12:26:41.642147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.614 [2024-07-15 12:26:41.642176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.642242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.614 [2024-07-15 12:26:41.642261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.642327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.614 [2024-07-15 12:26:41.642345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.642443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.614 [2024-07-15 12:26:41.642465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.614 #41 NEW cov: 12147 ft: 15270 corp: 30/522b lim: 25 exec/s: 41 rss: 73Mb L: 21/24 MS: 1 ChangeBinInt- 00:09:46.614 [2024-07-15 12:26:41.712408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:46.614 [2024-07-15 12:26:41.712439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.712510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:46.614 [2024-07-15 12:26:41.712535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.712596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:46.614 [2024-07-15 12:26:41.712623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.614 [2024-07-15 12:26:41.712713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:46.614 [2024-07-15 
12:26:41.712734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.614 #42 NEW cov: 12147 ft: 15283 corp: 31/544b lim: 25 exec/s: 21 rss: 73Mb L: 22/24 MS: 1 CrossOver- 00:09:46.614 #42 DONE cov: 12147 ft: 15283 corp: 31/544b lim: 25 exec/s: 21 rss: 73Mb 00:09:46.614 Done 42 runs in 2 second(s) 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:46.873 12:26:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:09:46.873 [2024-07-15 12:26:41.905208] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 
00:09:46.873 [2024-07-15 12:26:41.905282] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166592 ] 00:09:46.873 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.132 [2024-07-15 12:26:42.189745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.391 [2024-07-15 12:26:42.285798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.391 [2024-07-15 12:26:42.345366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.391 [2024-07-15 12:26:42.361581] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:47.391 INFO: Running with entropic power schedule (0xFF, 100). 00:09:47.391 INFO: Seed: 1045516177 00:09:47.391 INFO: Loaded 1 modules (357815 inline 8-bit counters): 357815 [0x29ab10c, 0x2a026c3), 00:09:47.391 INFO: Loaded 1 PC tables (357815 PCs): 357815 [0x2a026c8,0x2f78238), 00:09:47.391 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:47.391 INFO: A corpus is not provided, starting from an empty corpus 00:09:47.391 #2 INITED exec/s: 0 rss: 65Mb 00:09:47.391 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:47.391 This may also happen if the target rejected all inputs we tried so far 00:09:47.391 [2024-07-15 12:26:42.410062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.391 [2024-07-15 12:26:42.410095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.391 [2024-07-15 12:26:42.410146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.391 [2024-07-15 12:26:42.410162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.391 [2024-07-15 12:26:42.410215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.391 [2024-07-15 12:26:42.410230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.651 NEW_FUNC[1/697]: 0x4af920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:47.651 NEW_FUNC[2/697]: 0x4c0580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:47.651 #4 NEW cov: 11975 ft: 11972 corp: 2/66b lim: 100 exec/s: 0 rss: 71Mb L: 65/65 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:47.651 [2024-07-15 12:26:42.750725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.651 [2024-07-15 12:26:42.750771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.651 #5 NEW cov: 12105 ft: 13389 corp: 3/92b lim: 100 exec/s: 0 rss: 72Mb L: 26/65 MS: 1 InsertRepeatedBytes- 00:09:47.910 [2024-07-15 12:26:42.791069] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.791099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.910 [2024-07-15 12:26:42.791135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.791152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.910 [2024-07-15 12:26:42.791206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.791225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.910 #6 NEW cov: 12111 ft: 13718 corp: 4/157b lim: 100 exec/s: 0 rss: 72Mb L: 65/65 MS: 1 CMP- DE: "\005\000\000\000\000\000\000\000"- 00:09:47.910 [2024-07-15 12:26:42.840888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.840915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.910 #7 NEW cov: 12196 ft: 14039 corp: 5/187b lim: 100 exec/s: 0 rss: 72Mb L: 30/65 MS: 1 CMP- DE: "\376\377\000\000"- 00:09:47.910 [2024-07-15 12:26:42.891098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.891125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.910 #8 NEW cov: 12196 ft: 14167 corp: 6/213b lim: 100 exec/s: 0 rss: 72Mb L: 26/65 MS: 1 CrossOver- 00:09:47.910 [2024-07-15 12:26:42.931419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.931447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.910 [2024-07-15 12:26:42.931490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.910 [2024-07-15 12:26:42.931506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.910 [2024-07-15 12:26:42.931582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:42.931597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.911 #9 NEW cov: 12196 ft: 14264 corp: 7/278b lim: 100 exec/s: 0 rss: 72Mb L: 65/65 MS: 1 CopyPart- 00:09:47.911 [2024-07-15 12:26:42.981630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:42.981658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.911 [2024-07-15 12:26:42.981697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:44 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:42.981713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.911 [2024-07-15 12:26:42.981772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:42.981787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.911 #10 NEW cov: 12196 ft: 14390 corp: 8/344b lim: 100 exec/s: 0 rss: 72Mb L: 66/66 MS: 1 InsertByte- 00:09:47.911 [2024-07-15 12:26:43.021862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1644822528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:43.021891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.911 [2024-07-15 12:26:43.021947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:43.021963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.911 [2024-07-15 12:26:43.022020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:43.022036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.911 [2024-07-15 12:26:43.022092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.911 [2024-07-15 12:26:43.022107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.170 #13 NEW cov: 12196 ft: 14766 corp: 9/435b lim: 100 exec/s: 0 rss: 72Mb L: 91/91 MS: 3 InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:09:48.170 [2024-07-15 12:26:43.062151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1644822528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.062180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.062231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3486502863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.062248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.062305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.062322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.170 
[2024-07-15 12:26:43.062376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.062392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.062447] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.062463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:48.170 #14 NEW cov: 12196 ft: 14853 corp: 10/535b lim: 100 exec/s: 0 rss: 72Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:48.170 [2024-07-15 12:26:43.112080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.112110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.112153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:270582939648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.112171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.112230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.112248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.170 #15 NEW cov: 12196 ft: 14885 corp: 11/600b lim: 100 exec/s: 0 rss: 72Mb L: 65/100 MS: 1 ChangeByte- 00:09:48.170 [2024-07-15 12:26:43.152184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.152215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.152256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.152278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.152335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.152352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.170 #16 NEW cov: 12196 ft: 14949 corp: 12/670b lim: 100 exec/s: 0 rss: 72Mb L: 70/100 MS: 1 CopyPart- 00:09:48.170 [2024-07-15 12:26:43.212007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.212038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.170 #17 NEW cov: 
12196 ft: 14961 corp: 13/692b lim: 100 exec/s: 0 rss: 72Mb L: 22/100 MS: 1 EraseBytes- 00:09:48.170 [2024-07-15 12:26:43.252387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.170 [2024-07-15 12:26:43.252416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.170 [2024-07-15 12:26:43.252453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.171 [2024-07-15 12:26:43.252470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.171 [2024-07-15 12:26:43.252534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.171 [2024-07-15 12:26:43.252551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.171 #18 NEW cov: 12196 ft: 14978 corp: 14/757b lim: 100 exec/s: 0 rss: 72Mb L: 65/100 MS: 1 CopyPart- 00:09:48.171 [2024-07-15 12:26:43.292446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.171 [2024-07-15 12:26:43.292473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.171 [2024-07-15 12:26:43.292519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:253 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.171 [2024-07-15 12:26:43.292540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.171 [2024-07-15 12:26:43.292611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.171 [2024-07-15 12:26:43.292628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.429 NEW_FUNC[1/1]: 0x1a7e210 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:48.429 #19 NEW cov: 12219 ft: 15069 corp: 15/827b lim: 100 exec/s: 0 rss: 73Mb L: 70/100 MS: 1 ChangeBinInt- 00:09:48.429 [2024-07-15 12:26:43.342746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.342774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.342823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.342839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.342894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 
12:26:43.342913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.342969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.342985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.429 #20 NEW cov: 12219 ft: 15101 corp: 16/917b lim: 100 exec/s: 0 rss: 73Mb L: 90/100 MS: 1 InsertRepeatedBytes- 00:09:48.429 [2024-07-15 12:26:43.382753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.382782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.382819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.382834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.382892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.382910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.429 #21 NEW cov: 12219 ft: 15149 corp: 17/984b lim: 100 exec/s: 21 rss: 73Mb L: 67/100 MS: 1 EraseBytes- 00:09:48.429 [2024-07-15 12:26:43.433015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.433042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.433092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.433109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.433166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.433182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.429 [2024-07-15 12:26:43.433237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.429 [2024-07-15 12:26:43.433253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.430 #22 NEW cov: 12219 ft: 15167 corp: 18/1071b lim: 100 exec/s: 22 rss: 73Mb L: 87/100 MS: 1 EraseBytes- 00:09:48.430 [2024-07-15 12:26:43.483024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.483051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.430 [2024-07-15 12:26:43.483091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:253 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.483107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.430 [2024-07-15 12:26:43.483162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.483181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.430 #23 NEW cov: 12219 ft: 15235 corp: 19/1141b lim: 100 exec/s: 23 rss: 73Mb L: 70/100 MS: 1 ChangeBinInt- 00:09:48.430 [2024-07-15 12:26:43.523293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1644822528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.523319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.430 [2024-07-15 12:26:43.523370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:3486502863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.523387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.430 [2024-07-15 12:26:43.523442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.523456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.430 [2024-07-15 12:26:43.523513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.430 [2024-07-15 12:26:43.523533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.688 #24 NEW cov: 12219 ft: 15261 corp: 20/1222b lim: 100 exec/s: 24 rss: 73Mb L: 81/100 MS: 1 EraseBytes- 00:09:48.688 [2024-07-15 12:26:43.573435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.688 [2024-07-15 12:26:43.573462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.688 [2024-07-15 12:26:43.573536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.688 [2024-07-15 12:26:43.573553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.688 [2024-07-15 12:26:43.573609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:48.688 [2024-07-15 12:26:43.573626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.688 [2024-07-15 12:26:43.573683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.688 [2024-07-15 12:26:43.573697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.688 #25 NEW cov: 12219 ft: 15275 corp: 21/1313b lim: 100 exec/s: 25 rss: 73Mb L: 91/100 MS: 1 InsertByte- 00:09:48.688 [2024-07-15 12:26:43.613435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.688 [2024-07-15 12:26:43.613463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.688 [2024-07-15 12:26:43.613502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.613518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.613576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.613595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.689 #26 NEW cov: 12219 ft: 15332 corp: 22/1373b lim: 100 exec/s: 26 rss: 73Mb L: 60/100 MS: 1 EraseBytes- 00:09:48.689 [2024-07-15 12:26:43.663260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.663287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.689 #27 NEW cov: 12219 ft: 15390 corp: 23/1395b lim: 100 exec/s: 27 rss: 73Mb L: 22/100 MS: 1 ChangeByte- 00:09:48.689 [2024-07-15 12:26:43.713725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.713752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.713791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14411517948592128 len:13108 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.713807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.713862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.713878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.689 #28 NEW cov: 12219 ft: 15425 corp: 24/1472b lim: 100 exec/s: 28 rss: 73Mb L: 77/100 MS: 1 InsertRepeatedBytes- 00:09:48.689 [2024-07-15 
12:26:43.753971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.753998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.754046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.754063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.754117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12643988412655237240 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.754133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.754189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.754205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.689 #29 NEW cov: 12219 ft: 15444 corp: 25/1563b lim: 100 exec/s: 29 rss: 73Mb L: 91/100 MS: 1 ChangeByte- 00:09:48.689 [2024-07-15 12:26:43.804110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.804136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.804185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.804202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.804256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.804274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.689 [2024-07-15 12:26:43.804332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.689 [2024-07-15 12:26:43.804346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.948 #30 NEW cov: 12219 ft: 15464 corp: 26/1654b lim: 100 exec/s: 30 rss: 73Mb L: 91/100 MS: 1 ChangeBinInt- 00:09:48.948 [2024-07-15 12:26:43.844090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.844117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.844163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.844179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.844248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.844264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.948 #31 NEW cov: 12219 ft: 15478 corp: 27/1719b lim: 100 exec/s: 31 rss: 73Mb L: 65/100 MS: 1 ChangeByte- 00:09:48.948 [2024-07-15 12:26:43.884330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.884356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.884404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.884419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.884472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.884488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.884545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.884562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:48.948 #32 NEW cov: 12219 ft: 15500 corp: 28/1801b lim: 100 exec/s: 32 rss: 73Mb L: 82/100 MS: 1 EraseBytes- 00:09:48.948 [2024-07-15 12:26:43.924282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.924310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.924351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:253 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.924367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.924421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.924440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.948 #33 NEW cov: 12219 ft: 15568 
corp: 29/1871b lim: 100 exec/s: 33 rss: 73Mb L: 70/100 MS: 1 ChangeBit- 00:09:48.948 [2024-07-15 12:26:43.974434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.974460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.974509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:253 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.974525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:43.974583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:43.974599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.948 #34 NEW cov: 12219 ft: 15579 corp: 30/1949b lim: 100 exec/s: 34 rss: 73Mb L: 78/100 MS: 1 InsertRepeatedBytes- 00:09:48.948 [2024-07-15 12:26:44.024597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:44.024624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:44.024662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:44.024678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:44.024735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:44.024751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.948 #35 NEW cov: 12219 ft: 15585 corp: 31/2014b lim: 100 exec/s: 35 rss: 73Mb L: 65/100 MS: 1 ShuffleBytes- 00:09:48.948 [2024-07-15 12:26:44.064716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7588065583954920704 len:9985 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:44.064743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:44.064808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.948 [2024-07-15 12:26:44.064825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.948 [2024-07-15 12:26:44.064882] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:48.949 [2024-07-15 12:26:44.064898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:49.208 #36 NEW cov: 12219 ft: 
15591 corp: 32/2079b lim: 100 exec/s: 36 rss: 73Mb L: 65/100 MS: 1 CMP- DE: "iN9r\001\212'\000"- 00:09:49.208 [2024-07-15 12:26:44.104810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:450971566080 len:29186 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.104838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.104876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:44 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.104895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.104964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.104981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:49.208 #37 NEW cov: 12219 ft: 15601 corp: 33/2145b lim: 100 exec/s: 37 rss: 73Mb L: 66/100 MS: 1 PersAutoDict- DE: "iN9r\001\212'\000"- 00:09:49.208 [2024-07-15 12:26:44.154928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.154956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.154996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.155011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.155069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.155085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:49.208 #38 NEW cov: 12219 ft: 15622 corp: 34/2210b lim: 100 exec/s: 38 rss: 74Mb L: 65/100 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:09:49.208 [2024-07-15 12:26:44.205113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.205140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.205188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.205205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.205259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.205275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:09:49.208 #39 NEW cov: 12219 ft: 15627 corp: 35/2285b lim: 100 exec/s: 39 rss: 74Mb L: 75/100 MS: 1 InsertRepeatedBytes- 00:09:49.208 [2024-07-15 12:26:44.245207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.245235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.245273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.245289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.245343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.245357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:49.208 #40 NEW cov: 12219 ft: 15633 corp: 36/2350b lim: 100 exec/s: 40 rss: 74Mb L: 65/100 MS: 1 ChangeBit- 00:09:49.208 [2024-07-15 12:26:44.285493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.285532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.285586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.285603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.285656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:12643988412655237240 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.285673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:49.208 [2024-07-15 12:26:44.285730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:8680820740569200760 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.285745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:49.208 #41 NEW cov: 12219 ft: 15652 corp: 37/2441b lim: 100 exec/s: 41 rss: 74Mb L: 91/100 MS: 1 CopyPart- 00:09:49.208 [2024-07-15 12:26:44.335155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.208 [2024-07-15 12:26:44.335183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.470 #42 NEW cov: 12219 ft: 15657 corp: 38/2467b lim: 100 exec/s: 42 rss: 74Mb L: 26/100 MS: 1 ShuffleBytes- 00:09:49.470 [2024-07-15 12:26:44.375409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1280 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.470 [2024-07-15 12:26:44.375436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.470 [2024-07-15 12:26:44.375479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:49.470 [2024-07-15 12:26:44.375495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.470 #43 NEW cov: 12219 ft: 15944 corp: 39/2512b lim: 100 exec/s: 21 rss: 74Mb L: 45/100 MS: 1 EraseBytes- 00:09:49.470 #43 DONE cov: 12219 ft: 15944 corp: 39/2512b lim: 100 exec/s: 21 rss: 74Mb 00:09:49.470 ###### Recommended dictionary. ###### 00:09:49.470 "\005\000\000\000\000\000\000\000" # Uses: 0 00:09:49.470 "\376\377\000\000" # Uses: 1 00:09:49.470 "iN9r\001\212'\000" # Uses: 1 00:09:49.470 ###### End of recommended dictionary. ###### 00:09:49.470 Done 43 runs in 2 second(s) 00:09:49.470 12:26:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:09:49.470 12:26:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:49.470 12:26:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:49.470 12:26:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:09:49.470 00:09:49.470 real 1m5.836s 00:09:49.470 user 1m41.034s 00:09:49.470 sys 0m8.154s 00:09:49.470 12:26:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.470 12:26:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:49.470 ************************************ 00:09:49.470 END TEST nvmf_llvm_fuzz 00:09:49.470 ************************************ 00:09:49.470 12:26:44 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:09:49.470 12:26:44 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:09:49.470 12:26:44 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:09:49.470 12:26:44 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:49.470 12:26:44 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:49.470 12:26:44 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.470 12:26:44 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:49.732 ************************************ 00:09:49.732 START TEST vfio_llvm_fuzz 00:09:49.732 ************************************ 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:49.732 * Looking for test storage... 
00:09:49.732 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:49.732 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:49.733 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:49.733 #define SPDK_CONFIG_H 00:09:49.733 #define SPDK_CONFIG_APPS 1 00:09:49.733 #define SPDK_CONFIG_ARCH native 00:09:49.733 #undef SPDK_CONFIG_ASAN 00:09:49.733 #undef SPDK_CONFIG_AVAHI 00:09:49.733 #undef SPDK_CONFIG_CET 00:09:49.733 #define SPDK_CONFIG_COVERAGE 1 00:09:49.733 #define SPDK_CONFIG_CROSS_PREFIX 00:09:49.733 #undef SPDK_CONFIG_CRYPTO 00:09:49.733 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:49.733 #undef SPDK_CONFIG_CUSTOMOCF 00:09:49.733 #undef SPDK_CONFIG_DAOS 00:09:49.733 #define SPDK_CONFIG_DAOS_DIR 00:09:49.733 #define SPDK_CONFIG_DEBUG 1 00:09:49.733 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:49.733 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:49.733 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:49.733 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:49.733 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:49.733 #undef SPDK_CONFIG_DPDK_UADK 00:09:49.733 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:49.733 #define SPDK_CONFIG_EXAMPLES 1 00:09:49.733 #undef SPDK_CONFIG_FC 00:09:49.733 #define SPDK_CONFIG_FC_PATH 00:09:49.733 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:49.733 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:49.733 #undef SPDK_CONFIG_FUSE 00:09:49.733 #define SPDK_CONFIG_FUZZER 1 00:09:49.733 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:49.733 #undef SPDK_CONFIG_GOLANG 00:09:49.733 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:49.733 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:49.733 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:49.733 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:49.733 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:49.733 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:49.733 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:49.733 #define SPDK_CONFIG_IDXD 1 00:09:49.733 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:49.733 #undef SPDK_CONFIG_IPSEC_MB 00:09:49.733 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:49.733 #define SPDK_CONFIG_ISAL 1 00:09:49.733 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:09:49.733 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:49.733 #define SPDK_CONFIG_LIBDIR 00:09:49.733 #undef SPDK_CONFIG_LTO 00:09:49.733 #define SPDK_CONFIG_MAX_LCORES 128 00:09:49.733 #define SPDK_CONFIG_NVME_CUSE 1 00:09:49.733 #undef SPDK_CONFIG_OCF 00:09:49.733 #define SPDK_CONFIG_OCF_PATH 00:09:49.733 #define SPDK_CONFIG_OPENSSL_PATH 00:09:49.733 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:49.733 #define SPDK_CONFIG_PGO_DIR 00:09:49.733 #undef SPDK_CONFIG_PGO_USE 00:09:49.733 #define SPDK_CONFIG_PREFIX /usr/local 00:09:49.733 #undef SPDK_CONFIG_RAID5F 00:09:49.733 #undef SPDK_CONFIG_RBD 00:09:49.733 #define SPDK_CONFIG_RDMA 1 00:09:49.733 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:49.733 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:49.733 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:49.733 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:49.733 #undef SPDK_CONFIG_SHARED 00:09:49.733 #undef SPDK_CONFIG_SMA 00:09:49.733 #define SPDK_CONFIG_TESTS 1 00:09:49.733 #undef SPDK_CONFIG_TSAN 00:09:49.733 #define SPDK_CONFIG_UBLK 1 00:09:49.733 #define SPDK_CONFIG_UBSAN 1 00:09:49.733 #undef SPDK_CONFIG_UNIT_TESTS 00:09:49.733 #undef SPDK_CONFIG_URING 00:09:49.733 #define SPDK_CONFIG_URING_PATH 00:09:49.733 #undef SPDK_CONFIG_URING_ZNS 00:09:49.733 #undef SPDK_CONFIG_USDT 00:09:49.733 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:49.733 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:49.734 #define SPDK_CONFIG_VFIO_USER 1 00:09:49.734 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:49.734 #define SPDK_CONFIG_VHOST 1 00:09:49.734 #define SPDK_CONFIG_VIRTIO 1 00:09:49.734 #undef SPDK_CONFIG_VTUNE 00:09:49.734 #define SPDK_CONFIG_VTUNE_DIR 00:09:49.734 #define SPDK_CONFIG_WERROR 1 00:09:49.734 #define SPDK_CONFIG_WPDK_DIR 00:09:49.734 #undef SPDK_CONFIG_XNVME 00:09:49.734 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:49.734 
12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:49.734 12:26:44 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:49.734 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:09:49.735 12:26:44 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:49.735 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:49.736 12:26:44 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 4167063 ]] 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 4167063 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.HB6w0p 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.HB6w0p/tests/vfio /tmp/spdk.HB6w0p 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=893108224 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4391321600 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=87051767808 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508576768 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=7456808960 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47198650368 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.736 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895826944 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5890048 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253770240 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254290432 00:09:49.995 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=520192 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.996 12:26:44 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:49.996 * Looking for test storage... 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=87051767808 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=9671401472 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:49.996 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:49.996 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:49.996 12:26:44 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:49.996 [2024-07-15 12:26:44.937057] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:49.996 [2024-07-15 12:26:44.937154] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167108 ] 00:09:49.996 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.996 [2024-07-15 12:26:45.016951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.996 [2024-07-15 12:26:45.105033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.255 INFO: Running with entropic power schedule (0xFF, 100). 00:09:50.255 INFO: Seed: 3974481064 00:09:50.255 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:09:50.255 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:09:50.255 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:50.255 INFO: A corpus is not provided, starting from an empty corpus 00:09:50.255 #2 INITED exec/s: 0 rss: 66Mb 00:09:50.255 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:50.255 This may also happen if the target rejected all inputs we tried so far 00:09:50.255 [2024-07-15 12:26:45.359406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:50.772 NEW_FUNC[1/645]: 0x4838a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:50.772 NEW_FUNC[2/645]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:50.772 #11 NEW cov: 10850 ft: 10929 corp: 2/7b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 4 InsertByte-InsertRepeatedBytes-ChangeBit-InsertByte- 00:09:51.030 NEW_FUNC[1/13]: 0x1415230 in map_one /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:731 00:09:51.030 NEW_FUNC[2/13]: 0x16cdfb0 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:09:51.030 #12 NEW cov: 10977 ft: 15119 corp: 3/13b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:51.289 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:51.289 #14 NEW cov: 11001 ft: 16131 corp: 4/19b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:51.289 #18 NEW cov: 11004 ft: 16453 corp: 5/25b lim: 6 exec/s: 18 rss: 74Mb L: 6/6 MS: 4 EraseBytes-CopyPart-ShuffleBytes-CopyPart- 00:09:51.546 #19 NEW cov: 11004 ft: 16596 corp: 6/31b lim: 6 exec/s: 19 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:09:51.804 #20 NEW cov: 11004 ft: 17319 corp: 7/37b lim: 6 exec/s: 20 rss: 74Mb L: 6/6 MS: 1 ChangeByte- 00:09:52.062 #21 NEW cov: 11004 ft: 17586 corp: 8/43b lim: 6 exec/s: 21 rss: 74Mb L: 6/6 MS: 1 ChangeBit- 00:09:52.062 #22 NEW cov: 11011 ft: 17820 corp: 9/49b lim: 6 exec/s: 22 rss: 74Mb L: 6/6 MS: 1 CopyPart- 00:09:52.321 #23 NEW cov: 11011 ft: 18015 corp: 10/55b lim: 6 exec/s: 23 rss: 74Mb L: 6/6 MS: 1 CrossOver- 00:09:52.579 #24 NEW cov: 11011 ft: 18062 corp: 11/61b lim: 6 exec/s: 12 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:52.579 #24 DONE cov: 11011 ft: 18062 corp: 11/61b lim: 6 exec/s: 12 rss: 74Mb 00:09:52.579 Done 24 runs in 2 second(s) 00:09:52.579 [2024-07-15 12:26:47.486727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:09:52.838 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:52.838 12:26:47 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:52.838 [2024-07-15 12:26:47.803017] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:52.838 [2024-07-15 12:26:47.803116] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167470 ] 00:09:52.838 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.838 [2024-07-15 12:26:47.889602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.096 [2024-07-15 12:26:47.974719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.096 INFO: Running with entropic power schedule (0xFF, 100). 00:09:53.096 INFO: Seed: 2547519064 00:09:53.096 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:09:53.096 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:09:53.096 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:53.096 INFO: A corpus is not provided, starting from an empty corpus 00:09:53.096 #2 INITED exec/s: 0 rss: 66Mb 00:09:53.096 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:53.096 This may also happen if the target rejected all inputs we tried so far 00:09:53.355 [2024-07-15 12:26:48.234929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:53.355 [2024-07-15 12:26:48.289567] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:53.355 [2024-07-15 12:26:48.289612] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:53.355 [2024-07-15 12:26:48.289629] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:53.615 NEW_FUNC[1/659]: 0x483e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:53.615 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:53.615 #6 NEW cov: 10918 ft: 10923 corp: 2/5b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 4 ChangeBit-InsertByte-CrossOver-CrossOver- 00:09:53.873 [2024-07-15 12:26:48.771946] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:53.873 [2024-07-15 12:26:48.771991] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:53.873 [2024-07-15 12:26:48.772010] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:53.873 NEW_FUNC[1/1]: 0x1422a10 in _nvmf_vfio_user_req_free /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:5305 00:09:53.873 #22 NEW cov: 10970 ft: 14225 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:09:53.873 [2024-07-15 12:26:48.973808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:53.873 [2024-07-15 12:26:48.973834] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:53.873 [2024-07-15 12:26:48.973855] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.130 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:54.130 #23 NEW cov: 10990 ft: 15387 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:09:54.130 [2024-07-15 12:26:49.165169] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.130 [2024-07-15 12:26:49.165191] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.130 [2024-07-15 12:26:49.165208] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.388 #24 NEW cov: 10990 ft: 16768 corp: 5/17b lim: 4 exec/s: 24 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:09:54.388 [2024-07-15 12:26:49.362259] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.388 [2024-07-15 12:26:49.362281] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.388 [2024-07-15 12:26:49.362298] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.388 #25 NEW cov: 10990 ft: 16911 corp: 6/21b lim: 4 exec/s: 25 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:09:54.647 [2024-07-15 12:26:49.552432] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.647 [2024-07-15 12:26:49.552455] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.647 
[2024-07-15 12:26:49.552475] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.647 #26 NEW cov: 10990 ft: 17193 corp: 7/25b lim: 4 exec/s: 26 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:09:54.647 [2024-07-15 12:26:49.741587] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.647 [2024-07-15 12:26:49.741610] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.647 [2024-07-15 12:26:49.741628] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.906 #27 NEW cov: 10990 ft: 17609 corp: 8/29b lim: 4 exec/s: 27 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:09:54.906 [2024-07-15 12:26:49.942134] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.906 [2024-07-15 12:26:49.942160] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.906 [2024-07-15 12:26:49.942177] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:55.164 #28 NEW cov: 10997 ft: 17825 corp: 9/33b lim: 4 exec/s: 28 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:55.164 [2024-07-15 12:26:50.134784] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:55.164 [2024-07-15 12:26:50.134824] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:55.165 [2024-07-15 12:26:50.134843] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:55.165 #29 NEW cov: 10997 ft: 18000 corp: 10/37b lim: 4 exec/s: 14 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:55.165 #29 DONE cov: 10997 ft: 18000 corp: 10/37b lim: 4 exec/s: 14 rss: 74Mb 00:09:55.165 Done 29 runs in 2 second(s) 00:09:55.165 [2024-07-15 12:26:50.260744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # 
mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:55.424 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:09:55.424 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:55.682 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:55.682 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:55.682 12:26:50 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:55.682 [2024-07-15 12:26:50.582936] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:55.682 [2024-07-15 12:26:50.583021] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167825 ] 00:09:55.682 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.682 [2024-07-15 12:26:50.665928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.682 [2024-07-15 12:26:50.754508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.940 INFO: Running with entropic power schedule (0xFF, 100). 00:09:55.940 INFO: Seed: 1038557878 00:09:55.940 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:09:55.940 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:09:55.940 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:55.940 INFO: A corpus is not provided, starting from an empty corpus 00:09:55.940 #2 INITED exec/s: 0 rss: 66Mb 00:09:55.940 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:55.940 This may also happen if the target rejected all inputs we tried so far 00:09:55.940 [2024-07-15 12:26:51.022990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:56.198 [2024-07-15 12:26:51.074500] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:56.457 NEW_FUNC[1/659]: 0x484820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:56.457 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:56.457 #32 NEW cov: 10939 ft: 10911 corp: 2/9b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 5 CrossOver-ChangeBit-CopyPart-InsertRepeatedBytes-CrossOver- 00:09:56.457 [2024-07-15 12:26:51.555025] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:56.716 #33 NEW cov: 10953 ft: 14734 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:56.716 [2024-07-15 12:26:51.741568] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:56.974 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:56.974 #34 NEW cov: 10970 ft: 15312 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:56.974 [2024-07-15 12:26:51.935847] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:56.974 #35 NEW cov: 10973 ft: 15975 corp: 5/33b lim: 8 exec/s: 35 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:57.234 [2024-07-15 12:26:52.116299] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.234 #36 NEW cov: 10973 ft: 16339 corp: 6/41b lim: 8 exec/s: 36 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:09:57.234 [2024-07-15 12:26:52.296888] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.492 #37 NEW cov: 10973 ft: 16938 corp: 7/49b lim: 8 exec/s: 37 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:09:57.493 [2024-07-15 12:26:52.479171] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.493 #38 NEW cov: 10973 ft: 17216 corp: 8/57b lim: 8 exec/s: 38 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:09:57.751 [2024-07-15 12:26:52.661028] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.751 #39 NEW cov: 10980 ft: 17249 corp: 9/65b lim: 8 exec/s: 39 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:09:57.751 [2024-07-15 12:26:52.855375] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.010 #45 NEW cov: 10980 ft: 17597 corp: 10/73b lim: 8 exec/s: 45 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:09:58.010 [2024-07-15 12:26:53.034298] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.268 #46 NEW cov: 10980 ft: 17703 corp: 11/81b lim: 8 exec/s: 23 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:58.268 #46 DONE cov: 10980 ft: 17703 corp: 11/81b lim: 8 exec/s: 23 rss: 74Mb 00:09:58.268 Done 46 runs in 2 second(s) 00:09:58.268 [2024-07-15 12:26:53.165735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- 
../common.sh@72 -- # (( i++ )) 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:09:58.527 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:58.527 12:26:53 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:09:58.528 [2024-07-15 12:26:53.484248] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:09:58.528 [2024-07-15 12:26:53.484324] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168192 ] 00:09:58.528 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.528 [2024-07-15 12:26:53.561672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.528 [2024-07-15 12:26:53.642777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.786 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:58.786 INFO: Seed: 3910576788 00:09:58.786 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:09:58.786 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:09:58.786 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:58.786 INFO: A corpus is not provided, starting from an empty corpus 00:09:58.786 #2 INITED exec/s: 0 rss: 66Mb 00:09:58.786 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:58.786 This may also happen if the target rejected all inputs we tried so far 00:09:58.786 [2024-07-15 12:26:53.882395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:09:59.301 NEW_FUNC[1/658]: 0x484f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:09:59.301 NEW_FUNC[2/658]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:59.301 #34 NEW cov: 10938 ft: 10912 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertByte- 00:09:59.558 NEW_FUNC[1/1]: 0x112f930 in nvmf_ctrlr_get_ana_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2292 00:09:59.558 #35 NEW cov: 10964 ft: 14367 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:09:59.814 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:59.814 #50 NEW cov: 10981 ft: 14563 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 EraseBytes-ChangeBit-InsertRepeatedBytes-ShuffleBytes-CopyPart- 00:09:59.814 #61 NEW cov: 10981 ft: 15236 corp: 5/129b lim: 32 exec/s: 61 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:10:00.127 #62 NEW cov: 10981 ft: 16202 corp: 6/161b lim: 32 exec/s: 62 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:10:00.412 #63 NEW cov: 10981 ft: 16435 corp: 7/193b lim: 32 exec/s: 63 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:10:00.412 #64 NEW cov: 10981 ft: 16696 corp: 8/225b lim: 32 exec/s: 64 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:10:00.670 #70 NEW cov: 10988 ft: 16808 corp: 9/257b lim: 32 exec/s: 70 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:10:00.927 #76 NEW cov: 10988 ft: 16916 corp: 10/289b lim: 32 exec/s: 76 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:10:00.927 #77 NEW cov: 10988 ft: 17024 corp: 11/321b lim: 32 exec/s: 38 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:10:00.927 #77 DONE cov: 10988 ft: 17024 corp: 11/321b lim: 32 exec/s: 38 rss: 74Mb 00:10:00.927 Done 77 runs in 2 second(s) 00:10:00.927 [2024-07-15 12:26:56.038739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:01.184 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:10:01.185 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:01.185 12:26:56 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:10:01.442 [2024-07-15 12:26:56.336740] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:10:01.442 [2024-07-15 12:26:56.336816] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168558 ] 00:10:01.442 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.442 [2024-07-15 12:26:56.415350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.442 [2024-07-15 12:26:56.499690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.700 INFO: Running with entropic power schedule (0xFF, 100). 00:10:01.700 INFO: Seed: 2486601477 00:10:01.700 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:10:01.700 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:10:01.700 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:01.700 INFO: A corpus is not provided, starting from an empty corpus 00:10:01.700 #2 INITED exec/s: 0 rss: 66Mb 00:10:01.700 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:01.700 This may also happen if the target rejected all inputs we tried so far 00:10:01.700 [2024-07-15 12:26:56.765653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:10:02.216 NEW_FUNC[1/659]: 0x485780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:10:02.216 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:02.216 #11 NEW cov: 10946 ft: 10731 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 CrossOver-InsertRepeatedBytes-ShuffleBytes-InsertByte- 00:10:02.474 #22 NEW cov: 10960 ft: 13604 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:02.474 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:02.474 #23 NEW cov: 10980 ft: 14807 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:10:02.733 #24 NEW cov: 10980 ft: 15885 corp: 5/129b lim: 32 exec/s: 24 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:02.991 #25 NEW cov: 10980 ft: 16627 corp: 6/161b lim: 32 exec/s: 25 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:10:03.249 #31 NEW cov: 10980 ft: 17561 corp: 7/193b lim: 32 exec/s: 31 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:10:03.249 #32 NEW cov: 10980 ft: 17963 corp: 8/225b lim: 32 exec/s: 32 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:10:03.508 #38 NEW cov: 10987 ft: 18146 corp: 9/257b lim: 32 exec/s: 38 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:03.767 #39 NEW cov: 10987 ft: 18284 corp: 10/289b lim: 32 exec/s: 19 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:10:03.767 #39 DONE cov: 10987 ft: 18284 corp: 10/289b lim: 32 exec/s: 19 rss: 75Mb 00:10:03.767 Done 39 runs in 2 second(s) 00:10:03.767 [2024-07-15 12:26:58.762722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:10:04.025 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:04.026 12:26:59 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:10:04.026 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:04.026 12:26:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:10:04.026 [2024-07-15 12:26:59.070755] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:10:04.026 [2024-07-15 12:26:59.070834] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168916 ] 00:10:04.026 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.026 [2024-07-15 12:26:59.149076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.285 [2024-07-15 12:26:59.236041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.544 INFO: Running with entropic power schedule (0xFF, 100). 00:10:04.544 INFO: Seed: 936628838 00:10:04.544 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:10:04.544 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:10:04.544 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:04.544 INFO: A corpus is not provided, starting from an empty corpus 00:10:04.544 #2 INITED exec/s: 0 rss: 66Mb 00:10:04.544 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:04.544 This may also happen if the target rejected all inputs we tried so far 00:10:04.544 [2024-07-15 12:26:59.505920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:10:04.544 [2024-07-15 12:26:59.565572] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:04.544 [2024-07-15 12:26:59.565612] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:05.059 NEW_FUNC[1/660]: 0x486180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:10:05.059 NEW_FUNC[2/660]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:05.059 #109 NEW cov: 10961 ft: 10929 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:05.059 [2024-07-15 12:27:00.033228] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:05.059 [2024-07-15 12:27:00.033279] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:05.059 #120 NEW cov: 10975 ft: 13982 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 CrossOver- 00:10:05.317 [2024-07-15 12:27:00.218442] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:05.317 [2024-07-15 12:27:00.218486] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:05.317 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:05.317 #131 NEW cov: 10992 ft: 14572 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:10:05.317 [2024-07-15 12:27:00.399222] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:05.317 [2024-07-15 12:27:00.399256] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:05.575 #132 NEW cov: 10992 ft: 15049 corp: 5/53b lim: 13 exec/s: 132 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:10:05.575 [2024-07-15 12:27:00.583821] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:05.575 [2024-07-15 12:27:00.583853] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:05.575 #133 NEW cov: 10992 ft: 15152 corp: 6/66b lim: 13 exec/s: 133 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:10:05.833 [2024-07-15 12:27:00.769364] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:05.833 [2024-07-15 12:27:00.769399] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:05.833 #135 NEW cov: 10992 ft: 16362 corp: 7/79b lim: 13 exec/s: 135 rss: 74Mb L: 13/13 MS: 2 EraseBytes-CrossOver- 00:10:06.091 [2024-07-15 12:27:00.964780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:06.091 [2024-07-15 12:27:00.964812] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:06.091 #136 NEW cov: 10992 ft: 16840 corp: 8/92b lim: 13 exec/s: 136 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:10:06.091 [2024-07-15 12:27:01.161469] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:06.091 [2024-07-15 12:27:01.161501] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:06.350 #137 NEW cov: 10999 
ft: 17265 corp: 9/105b lim: 13 exec/s: 137 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:06.350 [2024-07-15 12:27:01.349663] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:06.350 [2024-07-15 12:27:01.349695] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:06.350 #138 NEW cov: 10999 ft: 17316 corp: 10/118b lim: 13 exec/s: 138 rss: 74Mb L: 13/13 MS: 1 CrossOver- 00:10:06.609 [2024-07-15 12:27:01.538180] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:06.609 [2024-07-15 12:27:01.538215] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:06.609 #139 NEW cov: 10999 ft: 17724 corp: 11/131b lim: 13 exec/s: 69 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:10:06.609 #139 DONE cov: 10999 ft: 17724 corp: 11/131b lim: 13 exec/s: 69 rss: 74Mb 00:10:06.609 Done 139 runs in 2 second(s) 00:10:06.609 [2024-07-15 12:27:01.667744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:10:06.868 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:10:06.868 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:06.868 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:06.868 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:10:06.868 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:10:06.869 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:06.869 12:27:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:10:06.869 [2024-07-15 12:27:01.987716] Starting SPDK v24.09-pre git sha1 dff473c1d / DPDK 24.03.0 initialization... 00:10:06.869 [2024-07-15 12:27:01.987799] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169398 ] 00:10:07.128 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.128 [2024-07-15 12:27:02.070022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.128 [2024-07-15 12:27:02.157855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.387 INFO: Running with entropic power schedule (0xFF, 100). 00:10:07.387 INFO: Seed: 3849646340 00:10:07.387 INFO: Loaded 1 modules (355051 inline 8-bit counters): 355051 [0x296c90c, 0x29c33f7), 00:10:07.387 INFO: Loaded 1 PC tables (355051 PCs): 355051 [0x29c33f8,0x2f2e2a8), 00:10:07.387 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:07.387 INFO: A corpus is not provided, starting from an empty corpus 00:10:07.387 #2 INITED exec/s: 0 rss: 66Mb 00:10:07.387 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:07.387 This may also happen if the target rejected all inputs we tried so far 00:10:07.387 [2024-07-15 12:27:02.413226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:10:07.387 [2024-07-15 12:27:02.464573] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:07.387 [2024-07-15 12:27:02.464632] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:07.906 NEW_FUNC[1/659]: 0x486e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:10:07.906 NEW_FUNC[2/659]: 0x4893b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:07.906 #38 NEW cov: 10941 ft: 10809 corp: 2/10b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:10:07.906 [2024-07-15 12:27:02.941524] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:07.906 [2024-07-15 12:27:02.941579] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:08.165 NEW_FUNC[1/1]: 0x12be280 in nvmf_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/nvmf.c:150 00:10:08.165 #40 NEW cov: 10967 ft: 13239 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 InsertByte-InsertRepeatedBytes- 00:10:08.165 [2024-07-15 12:27:03.138856] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:08.165 [2024-07-15 12:27:03.138890] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:08.165 NEW_FUNC[1/1]: 0x1a4a740 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:08.165 #51 NEW cov: 10984 ft: 15084 corp: 4/28b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBit- 00:10:08.423 [2024-07-15 12:27:03.336729] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:08.423 [2024-07-15 12:27:03.336760] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:08.423 #53 NEW cov: 10984 ft: 15358 corp: 5/37b lim: 9 exec/s: 53 rss: 74Mb L: 9/9 MS: 2 EraseBytes-CopyPart- 00:10:08.424 [2024-07-15 12:27:03.526765] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:08.424 [2024-07-15 12:27:03.526797] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:08.682 #59 NEW cov: 10984 ft: 16399 corp: 6/46b lim: 9 exec/s: 59 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:10:08.682 [2024-07-15 12:27:03.708294] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:08.682 [2024-07-15 12:27:03.708325] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:08.940 #60 NEW cov: 10984 ft: 16513 corp: 7/55b lim: 9 exec/s: 60 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:10:08.940 [2024-07-15 12:27:03.889192] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:08.940 [2024-07-15 12:27:03.889223] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:08.940 #71 NEW cov: 10984 ft: 16836 corp: 8/64b lim: 9 exec/s: 71 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:10:09.199 [2024-07-15 12:27:04.081056] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:09.199 [2024-07-15 12:27:04.081087] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:09.199 #72 NEW cov: 10991 ft: 17017 corp: 9/73b lim: 9 exec/s: 72 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:10:09.199 [2024-07-15 12:27:04.260926] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:09.199 [2024-07-15 12:27:04.260956] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:09.457 #76 NEW cov: 10991 ft: 17049 corp: 10/82b lim: 9 exec/s: 76 rss: 75Mb L: 9/9 MS: 4 EraseBytes-EraseBytes-CMP-InsertRepeatedBytes- DE: "\011\000"- 00:10:09.457 [2024-07-15 12:27:04.440503] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:09.457 [2024-07-15 12:27:04.440540] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:09.457 #77 NEW cov: 10991 ft: 17362 corp: 11/91b lim: 9 exec/s: 38 rss: 75Mb L: 9/9 MS: 1 ShuffleBytes- 00:10:09.457 #77 DONE cov: 10991 ft: 17362 corp: 11/91b lim: 9 exec/s: 38 rss: 75Mb 00:10:09.457 ###### Recommended dictionary. ###### 00:10:09.457 "\011\000" # Uses: 0 00:10:09.457 ###### End of recommended dictionary. 
###### 00:10:09.457 Done 77 runs in 2 second(s) 00:10:09.457 [2024-07-15 12:27:04.569746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:10:09.716 12:27:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:10:09.976 12:27:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:09.976 12:27:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:09.976 12:27:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:10:09.976 00:10:09.976 real 0m20.235s 00:10:09.976 user 0m28.045s 00:10:09.976 sys 0m2.005s 00:10:09.976 12:27:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.976 12:27:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:09.976 ************************************ 00:10:09.976 END TEST vfio_llvm_fuzz 00:10:09.976 ************************************ 00:10:09.976 12:27:04 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:10:09.976 12:27:04 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:10:09.976 00:10:09.976 real 1m26.332s 00:10:09.976 user 2m9.178s 00:10:09.976 sys 0m10.338s 00:10:09.976 12:27:04 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.976 12:27:04 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:09.976 ************************************ 00:10:09.976 END TEST llvm_fuzz 00:10:09.976 ************************************ 00:10:09.976 12:27:04 -- common/autotest_common.sh@1142 -- # return 0 00:10:09.976 12:27:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:10:09.976 12:27:04 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:10:09.976 12:27:04 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:10:09.976 12:27:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.976 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:10:09.976 12:27:04 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:10:09.976 12:27:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:10:09.976 12:27:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:10:09.976 12:27:04 -- common/autotest_common.sh@10 -- # set +x 00:10:14.167 INFO: APP EXITING 00:10:14.167 INFO: killing all VMs 00:10:14.167 INFO: killing vhost app 00:10:14.167 INFO: EXIT DONE 00:10:17.508 Waiting for block devices as requested 00:10:17.508 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:10:17.508 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:17.508 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:17.767 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:17.767 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:17.767 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:17.767 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:18.026 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:18.026 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:18.026 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:18.288 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:18.288 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:18.288 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:18.545 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:18.545 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:18.545 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:18.545 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:23.814 Cleaning 00:10:23.814 Removing: /dev/shm/spdk_tgt_trace.pid4141194 00:10:23.814 Removing: 
/var/run/dpdk/spdk_pid4138887
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4140010
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4141194
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4141750
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4142541
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4142781
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4143968
00:10:23.814 Removing: /var/run/dpdk/spdk_pid4144111
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4144416
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4144765
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4145033
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4145289
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4145521
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4145722
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4145919
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4146137
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4146888
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4149228
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4149605
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4149813
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4149937
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4150386
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4150395
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4150952
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4150964
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4151316
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4151354
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4151558
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4151731
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4152052
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4152235
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4152423
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4152642
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4152859
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4152954
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4153110
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4153311
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4153505
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4153699
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4153898
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4154089
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4154292
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4154532
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4154761
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4154999
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4155228
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4155427
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4155625
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4155819
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4156018
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4156214
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4156411
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4156609
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4156814
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4157008
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4157203
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4157432
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4157690
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4158233
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4158594
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4158950
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4159307
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4159622
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4159913
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4160220
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4160580
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4160942
00:10:24.073 Removing: /var/run/dpdk/spdk_pid4161301
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4161658
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4162011
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4162373
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4162726
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4163051
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4163337
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4163647
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4164001
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4164360
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4164722
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4165077
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4165439
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4165794
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4166193
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4166592
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4167108
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4167470
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4167825
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4168192
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4168558
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4168916
00:10:24.332 Removing: /var/run/dpdk/spdk_pid4169398
00:10:24.332 Clean
00:10:24.332 12:27:19 -- common/autotest_common.sh@1451 -- # return 0
00:10:24.332 12:27:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:10:24.332 12:27:19 -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:24.332 12:27:19 -- common/autotest_common.sh@10 -- # set +x
00:10:24.332 12:27:19 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:10:24.332 12:27:19 -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:24.332 12:27:19 -- common/autotest_common.sh@10 -- # set +x
00:10:24.332 12:27:19 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:24.332 12:27:19 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:10:24.332 12:27:19 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:10:24.332 12:27:19 -- spdk/autotest.sh@391 -- # hash lcov
00:10:24.332 12:27:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:10:24.591 12:27:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:10:24.591 12:27:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:10:24.591 12:27:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:24.591 12:27:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:24.591 12:27:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.591 12:27:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.591 12:27:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.591 12:27:19 -- paths/export.sh@5 -- $ export PATH
00:10:24.591 12:27:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.591 12:27:19 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:10:24.591 12:27:19 -- common/autobuild_common.sh@444 -- $ date +%s
00:10:24.591 12:27:19 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721039239.XXXXXX
00:10:24.591 12:27:19 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721039239.hkVIBe
00:10:24.591 12:27:19 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:10:24.591 12:27:19 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:10:24.591 12:27:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:10:24.591 12:27:19 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:10:24.591 12:27:19 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:10:24.591 12:27:19 -- common/autobuild_common.sh@460 -- $ get_config_params
00:10:24.591 12:27:19 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:10:24.591 12:27:19 -- common/autotest_common.sh@10 -- $ set +x
00:10:24.591 12:27:19 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:10:24.591 12:27:19 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:10:24.591 12:27:19 -- pm/common@17 -- $ local monitor
00:10:24.591 12:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:24.591 12:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:24.591 12:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:24.591 12:27:19 -- pm/common@21 -- $ date +%s
00:10:24.591 12:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:24.591 12:27:19 -- pm/common@21 -- $ date +%s
00:10:24.591 12:27:19 -- pm/common@21 -- $ date +%s
00:10:24.591 12:27:19 -- pm/common@25 -- $ sleep 1
00:10:24.591 12:27:19 -- pm/common@21 -- $ date +%s
00:10:24.591 12:27:19 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039239
00:10:24.591 12:27:19 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039239
00:10:24.591 12:27:19 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039239
00:10:24.591 12:27:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039239
00:10:24.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039239_collect-vmstat.pm.log
00:10:24.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039239_collect-cpu-temp.pm.log
00:10:24.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039239_collect-cpu-load.pm.log
00:10:24.591 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039239_collect-bmc-pm.bmc.pm.log
00:10:25.525 12:27:20 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:10:25.525 12:27:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:10:25.525 12:27:20 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:25.525 12:27:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:10:25.525 12:27:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:10:25.525 12:27:20 -- spdk/autopackage.sh@19 -- $ timing_finish
00:10:25.525 12:27:20 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:10:25.525 12:27:20 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:10:25.525 12:27:20 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:25.525 12:27:20 -- spdk/autopackage.sh@20 -- $ exit 0
00:10:25.525 12:27:20 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:10:25.525 12:27:20 -- pm/common@29 -- $ signal_monitor_resources TERM
00:10:25.525 12:27:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:10:25.525 12:27:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:25.525 12:27:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:10:25.525 12:27:20 -- pm/common@44 -- $ pid=4175609
00:10:25.525 12:27:20 -- pm/common@50 -- $ kill -TERM 4175609
00:10:25.525 12:27:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:25.525 12:27:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:10:25.525 12:27:20 -- pm/common@44 -- $ pid=4175611
00:10:25.525 12:27:20 -- pm/common@50 -- $ kill -TERM 4175611
00:10:25.525 12:27:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:25.525 12:27:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:10:25.525 12:27:20 -- pm/common@44 -- $ pid=4175613
00:10:25.525 12:27:20 -- pm/common@50 -- $ kill -TERM 4175613
00:10:25.525 12:27:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:25.525 12:27:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:10:25.525 12:27:20 -- pm/common@44 -- $ pid=4175653
00:10:25.525 12:27:20 -- pm/common@50 -- $ sudo -E kill -TERM 4175653
00:10:25.784 + [[ -n 4035598 ]]
00:10:25.784 + sudo kill 4035598
00:10:25.792 [Pipeline] }
00:10:25.810 [Pipeline] // stage
00:10:25.815 [Pipeline] }
00:10:25.832 [Pipeline] // timeout
00:10:25.838 [Pipeline] }
00:10:25.855 [Pipeline] // catchError
00:10:25.859 [Pipeline] }
00:10:25.877 [Pipeline] // wrap
00:10:25.883 [Pipeline] }
00:10:25.899 [Pipeline] // catchError
00:10:25.908 [Pipeline] stage
00:10:25.910 [Pipeline] { (Epilogue)
00:10:25.924 [Pipeline] catchError
00:10:25.926 [Pipeline] {
00:10:25.941 [Pipeline] echo
00:10:25.943 Cleanup processes
00:10:25.949 [Pipeline] sh
00:10:26.231 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:26.231 4093228 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721038841
00:10:26.231 4093259 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721038841
00:10:26.231 4175779 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:10:26.231 4176509 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:26.246 [Pipeline] sh
00:10:26.526 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:26.526 ++ grep -v 'sudo pgrep'
00:10:26.527 ++ awk '{print $1}'
00:10:26.527 + sudo kill -9 4175779
00:10:26.537 [Pipeline] sh
00:10:26.816 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:10:27.761 [Pipeline] sh
00:10:28.042 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:10:28.042 Artifacts sizes are good
00:10:28.057 [Pipeline] archiveArtifacts
00:10:28.064 Archiving artifacts
00:10:28.145 [Pipeline] sh
00:10:28.427 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:10:28.440 [Pipeline] cleanWs
00:10:28.449 [WS-CLEANUP] Deleting project workspace...
00:10:28.449 [WS-CLEANUP] Deferred wipeout is used...
00:10:28.456 [WS-CLEANUP] done
00:10:28.458 [Pipeline] }
00:10:28.479 [Pipeline] // catchError
00:10:28.492 [Pipeline] sh
00:10:28.768 + logger -p user.info -t JENKINS-CI
00:10:28.778 [Pipeline] }
00:10:28.795 [Pipeline] // stage
00:10:28.801 [Pipeline] }
00:10:28.818 [Pipeline] // node
00:10:28.824 [Pipeline] End of Pipeline
00:10:28.858 Finished: SUCCESS
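Note: the pm/common xtrace above (start_monitor_resources / stop_monitor_resources) follows a plain pid-file pattern: each collector is launched in the background, its pid is recorded in a <collector>.pid file under the power output directory, and teardown signals whichever pid files exist. The sketch below is only an illustration of that pattern under assumed names (POWER_DIR, MONITORS, start_monitors, stop_monitors); it is not the actual scripts/perf/pm implementation.

    #!/usr/bin/env bash
    # Minimal sketch of the pid-file start/stop pattern seen in the pm/common trace.
    # All names here are illustrative assumptions, not SPDK's real scripts.
    POWER_DIR=${POWER_DIR:-/tmp/power}
    MONITORS=(collect-cpu-load collect-vmstat collect-cpu-temp)

    start_monitors() {
        mkdir -p "$POWER_DIR"
        for m in "${MONITORS[@]}"; do
            "./$m" -d "$POWER_DIR" &          # launch collector in the background
            echo $! > "$POWER_DIR/$m.pid"     # record its pid for later teardown
        done
    }

    stop_monitors() {
        for m in "${MONITORS[@]}"; do
            local pidfile="$POWER_DIR/$m.pid"
            [[ -e $pidfile ]] || continue     # monitor never started; nothing to stop
            kill -TERM "$(cat "$pidfile")"    # ask the collector to exit cleanly
            rm -f "$pidfile"
        done
    }

In the log, autopackage.sh arms this teardown with "trap stop_monitor_resources EXIT", which is why the kill -TERM lines appear immediately after its "exit 0".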