00:00:00.001 Started by upstream project "autotest-nightly" build number 3706
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3087
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.047 The recommended git tool is: git
00:00:00.048 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.138 Using shallow fetch with depth 1
00:00:00.138 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.138 > git --version # timeout=10
00:00:00.209 > git --version # 'git version 2.39.2'
00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.210 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.210 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.247 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.257 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.267 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:04.267 > git config core.sparsecheckout # timeout=10
00:00:04.277 > git read-tree -mu HEAD # timeout=10
00:00:04.292 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:04.306 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:04.306 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:04.377 [Pipeline] Start of Pipeline
00:00:04.390 [Pipeline] library
00:00:04.392 Loading library shm_lib@master
00:00:04.392 Library shm_lib@master is cached. Copying from home.
00:00:04.409 [Pipeline] node
00:00:04.423 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:04.425 [Pipeline] {
00:00:04.433 [Pipeline] catchError
00:00:04.433 [Pipeline] {
00:00:04.444 [Pipeline] wrap
00:00:04.454 [Pipeline] {
00:00:04.464 [Pipeline] stage
00:00:04.466 [Pipeline] { (Prologue)
00:00:04.685 [Pipeline] sh
00:00:04.977 + logger -p user.info -t JENKINS-CI
00:00:04.996 [Pipeline] echo
00:00:04.997 Node: WFP20
00:00:05.004 [Pipeline] sh
00:00:05.323 [Pipeline] setCustomBuildProperty
00:00:05.333 [Pipeline] echo
00:00:05.334 Cleanup processes
00:00:05.338 [Pipeline] sh
00:00:05.616 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.616 3136254 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.631 [Pipeline] sh
00:00:05.914 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.914 ++ grep -v 'sudo pgrep'
00:00:05.914 ++ awk '{print $1}'
00:00:05.914 + sudo kill -9
00:00:05.914 + true
00:00:05.927 [Pipeline] cleanWs
00:00:05.999 [WS-CLEANUP] Deleting project workspace...
00:00:05.999 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.006 [WS-CLEANUP] done
00:00:06.011 [Pipeline] setCustomBuildProperty
00:00:06.024 [Pipeline] sh
00:00:06.300 + sudo git config --global --replace-all safe.directory '*'
00:00:06.369 [Pipeline] nodesByLabel
00:00:06.371 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.379 [Pipeline] httpRequest
00:00:06.382 HttpMethod: GET
00:00:06.382 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.385 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.395 Response Code: HTTP/1.1 200 OK
00:00:06.395 Success: Status code 200 is in the accepted range: 200,404
00:00:06.395 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:08.613 [Pipeline] sh
00:00:08.896 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:08.914 [Pipeline] httpRequest
00:00:08.919 HttpMethod: GET
00:00:08.919 URL: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:08.920 Sending request to url: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:08.933 Response Code: HTTP/1.1 200 OK
00:00:08.934 Success: Status code 200 is in the accepted range: 200,404
00:00:08.934 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:40.602 [Pipeline] sh
00:00:40.885 + tar --no-same-owner -xf spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:43.429 [Pipeline] sh
00:00:43.706 + git -C spdk log --oneline -n5
00:00:43.706 4506c0c36 test/common: Enable inherit_errexit
00:00:43.706 b24df7cfa test: Drop superfluous calls to print_backtrace()
00:00:43.706 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback
00:00:43.706 1dc065205 test/scheduler: Calculate median of the cpu load samples
00:00:43.706 b22f1b34d test/scheduler: Enhance lookup of the $old_cgroup in move_proc()
00:00:43.716 [Pipeline] }
00:00:43.731 [Pipeline] // stage
00:00:43.739 [Pipeline] stage
00:00:43.740 [Pipeline] { (Prepare)
00:00:43.756 [Pipeline] writeFile
00:00:43.771 [Pipeline] sh
00:00:44.051 + logger -p user.info -t JENKINS-CI
00:00:44.070 [Pipeline] sh
00:00:44.352 + logger -p user.info -t JENKINS-CI
00:00:44.363 [Pipeline] sh
00:00:44.645 + cat autorun-spdk.conf
00:00:44.646 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.646 SPDK_TEST_FUZZER_SHORT=1
00:00:44.646 SPDK_TEST_FUZZER=1
00:00:44.646 SPDK_RUN_UBSAN=1
00:00:44.652 RUN_NIGHTLY=1
00:00:44.657 [Pipeline] readFile
00:00:44.679 [Pipeline] withEnv
00:00:44.680 [Pipeline] {
00:00:44.694 [Pipeline] sh
00:00:44.976 + set -ex
00:00:44.976 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:44.976 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:44.976 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.976 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:44.976 ++ SPDK_TEST_FUZZER=1
00:00:44.976 ++ SPDK_RUN_UBSAN=1
00:00:44.976 ++ RUN_NIGHTLY=1
00:00:44.976 + case $SPDK_TEST_NVMF_NICS in
00:00:44.976 + DRIVERS=
00:00:44.976 + [[ -n '' ]]
00:00:44.976 + exit 0
00:00:44.985 [Pipeline] }
00:00:45.002 [Pipeline] // withEnv
00:00:45.007 [Pipeline] }
00:00:45.022 [Pipeline] // stage
00:00:45.032 [Pipeline] catchError
00:00:45.034 [Pipeline] {
00:00:45.048 [Pipeline] timeout
00:00:45.049 Timeout set to expire in 30 min
00:00:45.050 [Pipeline] {
00:00:45.065 [Pipeline] stage
00:00:45.067 [Pipeline] { (Tests)
00:00:45.082 [Pipeline] sh
00:00:45.363 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:45.363 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:45.363 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:45.363 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:45.363 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:45.363 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:45.363 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:45.363 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:45.363 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:45.363 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:45.363 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:45.363 + source /etc/os-release
00:00:45.363 ++ NAME='Fedora Linux'
00:00:45.363 ++ VERSION='38 (Cloud Edition)'
00:00:45.363 ++ ID=fedora
00:00:45.363 ++ VERSION_ID=38
00:00:45.363 ++ VERSION_CODENAME=
00:00:45.363 ++ PLATFORM_ID=platform:f38
00:00:45.363 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:45.363 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:45.363 ++ LOGO=fedora-logo-icon
00:00:45.363 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:45.363 ++ HOME_URL=https://fedoraproject.org/
00:00:45.363 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:45.363 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:45.363 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:45.363 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:45.363 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:45.363 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:45.363 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:45.363 ++ SUPPORT_END=2024-05-14
00:00:45.363 ++ VARIANT='Cloud Edition'
00:00:45.363 ++ VARIANT_ID=cloud
00:00:45.363 + uname -a
00:00:45.363 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:45.364 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:48.653 Hugepages
00:00:48.653 node hugesize free / total
00:00:48.653 node0 1048576kB 0 / 0
00:00:48.653 node0 2048kB 0 / 0
00:00:48.653 node1 1048576kB 0 / 0
00:00:48.653 node1 2048kB 0 / 0
00:00:48.653
00:00:48.653 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:48.653 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:48.653 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:48.653 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:48.653 + rm -f /tmp/spdk-ld-path
00:00:48.653 + source autorun-spdk.conf
00:00:48.653 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.653 ++
SPDK_TEST_FUZZER_SHORT=1 00:00:48.653 ++ SPDK_TEST_FUZZER=1 00:00:48.653 ++ SPDK_RUN_UBSAN=1 00:00:48.653 ++ RUN_NIGHTLY=1 00:00:48.653 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.653 + [[ -n '' ]] 00:00:48.653 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:48.653 + for M in /var/spdk/build-*-manifest.txt 00:00:48.653 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.653 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:48.653 + for M in /var/spdk/build-*-manifest.txt 00:00:48.653 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.653 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:48.653 ++ uname 00:00:48.653 + [[ Linux == \L\i\n\u\x ]] 00:00:48.653 + sudo dmesg -T 00:00:48.653 + sudo dmesg --clear 00:00:48.653 + dmesg_pid=3137141 00:00:48.653 + [[ Fedora Linux == FreeBSD ]] 00:00:48.653 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.653 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.653 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.654 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.654 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.654 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.654 + sudo dmesg -Tw 00:00:48.654 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.654 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:48.654 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.654 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.654 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.654 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.654 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.654 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.654 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:48.654 Test configuration: 00:00:48.654 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.654 SPDK_TEST_FUZZER_SHORT=1 00:00:48.654 SPDK_TEST_FUZZER=1 00:00:48.654 SPDK_RUN_UBSAN=1 00:00:48.654 RUN_NIGHTLY=1 05:25:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:48.654 05:25:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.654 05:25:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.654 05:25:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.654 05:25:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.654 05:25:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.654 05:25:38 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.654 05:25:38 -- paths/export.sh@5 -- $ export PATH 00:00:48.654 05:25:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.654 05:25:38 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:48.654 05:25:38 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:48.654 05:25:38 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715743538.XXXXXX 00:00:48.654 05:25:38 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715743538.CIxCOQ 00:00:48.654 05:25:38 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:48.654 05:25:38 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:48.654 05:25:38 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:48.654 05:25:38 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.654 05:25:38 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.654 05:25:38 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:48.654 05:25:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:48.654 05:25:38 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.654 05:25:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:48.654 05:25:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:48.654 05:25:38 -- pm/common@17 -- $ local monitor 00:00:48.654 05:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.654 05:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.654 05:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.654 05:25:38 -- pm/common@21 -- $ date +%s 00:00:48.654 05:25:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.654 05:25:38 -- pm/common@21 -- $ date +%s 00:00:48.654 05:25:38 -- pm/common@25 -- $ sleep 1 00:00:48.654 05:25:38 -- pm/common@21 -- $ date +%s 00:00:48.654 05:25:38 -- pm/common@21 -- $ date +%s 00:00:48.654 05:25:38 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715743538 00:00:48.654 05:25:38 -- 
pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715743538 00:00:48.654 05:25:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715743538 00:00:48.654 05:25:38 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715743538 00:00:48.654 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715743538_collect-vmstat.pm.log 00:00:48.654 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715743538_collect-cpu-load.pm.log 00:00:48.654 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715743538_collect-cpu-temp.pm.log 00:00:48.654 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715743538_collect-bmc-pm.bmc.pm.log 00:00:49.591 05:25:39 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:49.591 05:25:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.591 05:25:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.591 05:25:39 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:49.591 05:25:39 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.591 Wed May 15 03:25:39 AM UTC 2024 00:00:49.591 05:25:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.591 v24.05-pre-658-g4506c0c36 00:00:49.591 05:25:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.591 05:25:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.591 05:25:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.591 05:25:39 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:49.591 05:25:39 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:49.591 05:25:39 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.591 ************************************ 00:00:49.591 START TEST ubsan 00:00:49.591 ************************************ 00:00:49.591 05:25:39 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:49.591 using ubsan 00:00:49.591 00:00:49.591 real 0m0.001s 00:00:49.591 user 0m0.000s 00:00:49.591 sys 0m0.000s 00:00:49.591 05:25:39 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:49.591 05:25:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:49.591 ************************************ 00:00:49.591 END TEST ubsan 00:00:49.591 ************************************ 00:00:49.849 05:25:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:49.849 05:25:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:49.849 05:25:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:49.849 05:25:39 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:49.850 05:25:39 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:49.850 05:25:39 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:49.850 05:25:39 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:00:49.850 05:25:39 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:49.850 05:25:39 -- 
common/autotest_common.sh@10 -- $ set +x 00:00:49.850 ************************************ 00:00:49.850 START TEST autobuild_llvm_precompile 00:00:49.850 ************************************ 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ _llvm_precompile 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:49.850 Target: x86_64-redhat-linux-gnu 00:00:49.850 Thread model: posix 00:00:49.850 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:49.850 05:25:39 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:50.108 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:50.108 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:50.675 Using 'verbs' RDMA provider 00:01:06.525 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:21.406 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:21.406 Creating mk/config.mk...done. 00:01:21.407 Creating mk/cc.flags.mk...done. 00:01:21.407 Type 'make' to build. 
00:01:21.407 00:01:21.407 real 0m29.618s 00:01:21.407 user 0m12.643s 00:01:21.407 sys 0m16.333s 00:01:21.407 05:26:09 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:21.407 05:26:09 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:21.407 ************************************ 00:01:21.407 END TEST autobuild_llvm_precompile 00:01:21.407 ************************************ 00:01:21.407 05:26:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.407 05:26:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.407 05:26:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.407 05:26:09 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:21.407 05:26:09 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:21.407 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:21.407 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:21.407 Using 'verbs' RDMA provider 00:01:33.630 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:45.845 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:45.845 Creating mk/config.mk...done. 00:01:45.845 Creating mk/cc.flags.mk...done. 00:01:45.845 Type 'make' to build. 00:01:45.846 05:26:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:45.846 05:26:34 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:45.846 05:26:34 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:45.846 05:26:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.846 ************************************ 00:01:45.846 START TEST make 00:01:45.846 ************************************ 00:01:45.846 05:26:34 make -- common/autotest_common.sh@1122 -- $ make -j112 00:01:45.846 make[1]: Nothing to be done for 'all'. 
00:01:46.413 The Meson build system
00:01:46.413 Version: 1.3.1
00:01:46.413 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:46.413 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.413 Build type: native build
00:01:46.413 Project name: libvfio-user
00:01:46.413 Project version: 0.0.1
00:01:46.413 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:46.413 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:46.413 Host machine cpu family: x86_64
00:01:46.413 Host machine cpu: x86_64
00:01:46.413 Run-time dependency threads found: YES
00:01:46.413 Library dl found: YES
00:01:46.413 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:46.413 Run-time dependency json-c found: YES 0.17
00:01:46.413 Run-time dependency cmocka found: YES 1.1.7
00:01:46.413 Program pytest-3 found: NO
00:01:46.413 Program flake8 found: NO
00:01:46.413 Program misspell-fixer found: NO
00:01:46.413 Program restructuredtext-lint found: NO
00:01:46.413 Program valgrind found: YES (/usr/bin/valgrind)
00:01:46.413 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:46.413 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:46.413 Compiler for C supports arguments -Wwrite-strings: YES
00:01:46.413 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:46.413 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:46.413 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:46.413 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:46.413 Build targets in project: 8
00:01:46.413 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:46.413 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:46.413
00:01:46.413 libvfio-user 0.0.1
00:01:46.413
00:01:46.413 User defined options
00:01:46.413 buildtype : debug
00:01:46.413 default_library: static
00:01:46.413 libdir : /usr/local/lib
00:01:46.413
00:01:46.413 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:46.983 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:46.983 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:01:46.983 [2/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:46.983 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:01:46.983 [4/36] Compiling C object samples/null.p/null.c.o
00:01:46.983 [5/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:46.983 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:46.983 [7/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:46.983 [8/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:01:46.983 [9/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:01:46.983 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:01:46.983 [11/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:46.983 [12/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:46.983 [13/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:46.983 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:01:46.983 [15/36] Compiling C object samples/server.p/server.c.o
00:01:46.983 [16/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:46.983 [17/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:46.983 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:01:46.983 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:01:46.983 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:46.983 [21/36] Compiling C object test/unit_tests.p/mocks.c.o
00:01:46.983 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:46.983 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:46.983 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:46.983 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:46.983 [26/36] Compiling C object samples/client.p/client.c.o
00:01:46.983 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:01:46.983 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:46.983 [29/36] Linking static target lib/libvfio-user.a
00:01:46.983 [30/36] Linking target samples/client
00:01:46.983 [31/36] Linking target samples/gpio-pci-idio-16
00:01:46.983 [32/36] Linking target samples/shadow_ioeventfd_server
00:01:46.983 [33/36] Linking target samples/null
00:01:46.983 [34/36] Linking target test/unit_tests
00:01:46.983 [35/36] Linking target samples/server
00:01:46.983 [36/36] Linking target samples/lspci
00:01:46.983 INFO: autodetecting backend as ninja
00:01:46.983 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.983 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.552 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.552 ninja: no work to do. 00:01:52.837 The Meson build system 00:01:52.837 Version: 1.3.1 00:01:52.837 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:52.837 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:52.837 Build type: native build 00:01:52.837 Program cat found: YES (/usr/bin/cat) 00:01:52.837 Project name: DPDK 00:01:52.837 Project version: 23.11.0 00:01:52.837 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:52.837 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:52.837 Host machine cpu family: x86_64 00:01:52.837 Host machine cpu: x86_64 00:01:52.837 Message: ## Building in Developer Mode ## 00:01:52.837 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.837 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:52.837 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.837 Program python3 found: YES (/usr/bin/python3) 00:01:52.837 Program cat found: YES (/usr/bin/cat) 00:01:52.837 Compiler for C supports arguments -march=native: YES 00:01:52.837 Checking for size of "void *" : 8 00:01:52.837 Checking for size of "void *" : 8 (cached) 00:01:52.837 Library m found: YES 00:01:52.837 Library numa found: YES 00:01:52.837 Has header "numaif.h" : YES 00:01:52.837 Library fdt found: NO 00:01:52.837 Library execinfo found: NO 00:01:52.837 Has header "execinfo.h" : YES 00:01:52.837 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.837 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.837 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.837 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.837 Run-time dependency openssl found: YES 3.0.9 00:01:52.837 Run-time dependency libpcap found: YES 1.10.4 00:01:52.837 Has header "pcap.h" with dependency libpcap: YES 00:01:52.837 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.837 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.837 Compiler for C supports arguments -Wformat: YES 00:01:52.837 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:52.837 Compiler for C supports arguments -Wformat-security: YES 00:01:52.837 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.837 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.837 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.837 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.837 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.837 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.837 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.837 Compiler for C supports arguments -Wundef: YES 00:01:52.837 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.837 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.837 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:52.837 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.837 Program objdump found: YES (/usr/bin/objdump) 00:01:52.837 
Compiler for C supports arguments -mavx512f: YES
00:01:52.837 Checking if "AVX512 checking" compiles: YES
00:01:52.837 Fetching value of define "__SSE4_2__" : 1
00:01:52.837 Fetching value of define "__AES__" : 1
00:01:52.837 Fetching value of define "__AVX__" : 1
00:01:52.837 Fetching value of define "__AVX2__" : 1
00:01:52.837 Fetching value of define "__AVX512BW__" : 1
00:01:52.837 Fetching value of define "__AVX512CD__" : 1
00:01:52.837 Fetching value of define "__AVX512DQ__" : 1
00:01:52.837 Fetching value of define "__AVX512F__" : 1
00:01:52.837 Fetching value of define "__AVX512VL__" : 1
00:01:52.837 Fetching value of define "__PCLMUL__" : 1
00:01:52.837 Fetching value of define "__RDRND__" : 1
00:01:52.837 Fetching value of define "__RDSEED__" : 1
00:01:52.837 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:52.837 Fetching value of define "__znver1__" : (undefined)
00:01:52.837 Fetching value of define "__znver2__" : (undefined)
00:01:52.837 Fetching value of define "__znver3__" : (undefined)
00:01:52.837 Fetching value of define "__znver4__" : (undefined)
00:01:52.837 Compiler for C supports arguments -Wno-format-truncation: NO
00:01:52.837 Message: lib/log: Defining dependency "log"
00:01:52.837 Message: lib/kvargs: Defining dependency "kvargs"
00:01:52.837 Message: lib/telemetry: Defining dependency "telemetry"
00:01:52.837 Checking for function "getentropy" : NO
00:01:52.837 Message: lib/eal: Defining dependency "eal"
00:01:52.837 Message: lib/ring: Defining dependency "ring"
00:01:52.837 Message: lib/rcu: Defining dependency "rcu"
00:01:52.837 Message: lib/mempool: Defining dependency "mempool"
00:01:52.837 Message: lib/mbuf: Defining dependency "mbuf"
00:01:52.837 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:52.837 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:52.837 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:52.837 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:52.837 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:52.837 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:52.837 Compiler for C supports arguments -mpclmul: YES
00:01:52.837 Compiler for C supports arguments -maes: YES
00:01:52.837 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:52.837 Compiler for C supports arguments -mavx512bw: YES
00:01:52.837 Compiler for C supports arguments -mavx512dq: YES
00:01:52.837 Compiler for C supports arguments -mavx512vl: YES
00:01:52.837 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:52.837 Compiler for C supports arguments -mavx2: YES
00:01:52.837 Compiler for C supports arguments -mavx: YES
00:01:52.837 Message: lib/net: Defining dependency "net"
00:01:52.837 Message: lib/meter: Defining dependency "meter"
00:01:52.837 Message: lib/ethdev: Defining dependency "ethdev"
00:01:52.838 Message: lib/pci: Defining dependency "pci"
00:01:52.838 Message: lib/cmdline: Defining dependency "cmdline"
00:01:52.838 Message: lib/hash: Defining dependency "hash"
00:01:52.838 Message: lib/timer: Defining dependency "timer"
00:01:52.838 Message: lib/compressdev: Defining dependency "compressdev"
00:01:52.838 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:52.838 Message: lib/dmadev: Defining dependency "dmadev"
00:01:52.838 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:52.838 Message: lib/power: Defining dependency "power"
00:01:52.838 Message: lib/reorder: Defining dependency "reorder"
00:01:52.838 Message: lib/security: Defining dependency
"security" 00:01:52.838 Has header "linux/userfaultfd.h" : YES 00:01:52.838 Has header "linux/vduse.h" : YES 00:01:52.838 Message: lib/vhost: Defining dependency "vhost" 00:01:52.838 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:52.838 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.838 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.838 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.838 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:52.838 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:52.838 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:52.838 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:52.838 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:52.838 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:52.838 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.838 Configuring doxy-api-html.conf using configuration 00:01:52.838 Configuring doxy-api-man.conf using configuration 00:01:52.838 Program mandb found: YES (/usr/bin/mandb) 00:01:52.838 Program sphinx-build found: NO 00:01:52.838 Configuring rte_build_config.h using configuration 00:01:52.838 Message: 00:01:52.838 ================= 00:01:52.838 Applications Enabled 00:01:52.838 ================= 00:01:52.838 00:01:52.838 apps: 00:01:52.838 00:01:52.838 00:01:52.838 Message: 00:01:52.838 ================= 00:01:52.838 Libraries Enabled 00:01:52.838 ================= 00:01:52.838 00:01:52.838 libs: 00:01:52.838 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:52.838 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:52.838 cryptodev, dmadev, power, reorder, security, vhost, 00:01:52.838 00:01:52.838 Message: 00:01:52.838 =============== 00:01:52.838 Drivers Enabled 00:01:52.838 =============== 00:01:52.838 00:01:52.838 common: 00:01:52.838 00:01:52.838 bus: 00:01:52.838 pci, vdev, 00:01:52.838 mempool: 00:01:52.838 ring, 00:01:52.838 dma: 00:01:52.838 00:01:52.838 net: 00:01:52.838 00:01:52.838 crypto: 00:01:52.838 00:01:52.838 compress: 00:01:52.838 00:01:52.838 vdpa: 00:01:52.838 00:01:52.838 00:01:52.838 Message: 00:01:52.838 ================= 00:01:52.838 Content Skipped 00:01:52.838 ================= 00:01:52.838 00:01:52.838 apps: 00:01:52.838 dumpcap: explicitly disabled via build config 00:01:52.838 graph: explicitly disabled via build config 00:01:52.838 pdump: explicitly disabled via build config 00:01:52.838 proc-info: explicitly disabled via build config 00:01:52.838 test-acl: explicitly disabled via build config 00:01:52.838 test-bbdev: explicitly disabled via build config 00:01:52.838 test-cmdline: explicitly disabled via build config 00:01:52.838 test-compress-perf: explicitly disabled via build config 00:01:52.838 test-crypto-perf: explicitly disabled via build config 00:01:52.838 test-dma-perf: explicitly disabled via build config 00:01:52.838 test-eventdev: explicitly disabled via build config 00:01:52.838 test-fib: explicitly disabled via build config 00:01:52.838 test-flow-perf: explicitly disabled via build config 00:01:52.838 test-gpudev: explicitly disabled via build config 00:01:52.838 test-mldev: explicitly disabled via build config 00:01:52.838 test-pipeline: explicitly disabled via build config 00:01:52.838 test-pmd: explicitly disabled via build config 00:01:52.838 test-regex: explicitly disabled via 
build config 00:01:52.838 test-sad: explicitly disabled via build config 00:01:52.838 test-security-perf: explicitly disabled via build config 00:01:52.838 00:01:52.838 libs: 00:01:52.838 metrics: explicitly disabled via build config 00:01:52.838 acl: explicitly disabled via build config 00:01:52.838 bbdev: explicitly disabled via build config 00:01:52.838 bitratestats: explicitly disabled via build config 00:01:52.838 bpf: explicitly disabled via build config 00:01:52.838 cfgfile: explicitly disabled via build config 00:01:52.838 distributor: explicitly disabled via build config 00:01:52.838 efd: explicitly disabled via build config 00:01:52.838 eventdev: explicitly disabled via build config 00:01:52.838 dispatcher: explicitly disabled via build config 00:01:52.838 gpudev: explicitly disabled via build config 00:01:52.838 gro: explicitly disabled via build config 00:01:52.838 gso: explicitly disabled via build config 00:01:52.838 ip_frag: explicitly disabled via build config 00:01:52.838 jobstats: explicitly disabled via build config 00:01:52.838 latencystats: explicitly disabled via build config 00:01:52.838 lpm: explicitly disabled via build config 00:01:52.838 member: explicitly disabled via build config 00:01:52.838 pcapng: explicitly disabled via build config 00:01:52.838 rawdev: explicitly disabled via build config 00:01:52.838 regexdev: explicitly disabled via build config 00:01:52.838 mldev: explicitly disabled via build config 00:01:52.838 rib: explicitly disabled via build config 00:01:52.838 sched: explicitly disabled via build config 00:01:52.838 stack: explicitly disabled via build config 00:01:52.838 ipsec: explicitly disabled via build config 00:01:52.838 pdcp: explicitly disabled via build config 00:01:52.838 fib: explicitly disabled via build config 00:01:52.838 port: explicitly disabled via build config 00:01:52.838 pdump: explicitly disabled via build config 00:01:52.838 table: explicitly disabled via build config 00:01:52.838 pipeline: explicitly disabled via build config 00:01:52.838 graph: explicitly disabled via build config 00:01:52.838 node: explicitly disabled via build config 00:01:52.838 00:01:52.838 drivers: 00:01:52.838 common/cpt: not in enabled drivers build config 00:01:52.838 common/dpaax: not in enabled drivers build config 00:01:52.838 common/iavf: not in enabled drivers build config 00:01:52.838 common/idpf: not in enabled drivers build config 00:01:52.838 common/mvep: not in enabled drivers build config 00:01:52.838 common/octeontx: not in enabled drivers build config 00:01:52.838 bus/auxiliary: not in enabled drivers build config 00:01:52.838 bus/cdx: not in enabled drivers build config 00:01:52.838 bus/dpaa: not in enabled drivers build config 00:01:52.838 bus/fslmc: not in enabled drivers build config 00:01:52.838 bus/ifpga: not in enabled drivers build config 00:01:52.838 bus/platform: not in enabled drivers build config 00:01:52.838 bus/vmbus: not in enabled drivers build config 00:01:52.838 common/cnxk: not in enabled drivers build config 00:01:52.838 common/mlx5: not in enabled drivers build config 00:01:52.838 common/nfp: not in enabled drivers build config 00:01:52.838 common/qat: not in enabled drivers build config 00:01:52.838 common/sfc_efx: not in enabled drivers build config 00:01:52.838 mempool/bucket: not in enabled drivers build config 00:01:52.838 mempool/cnxk: not in enabled drivers build config 00:01:52.838 mempool/dpaa: not in enabled drivers build config 00:01:52.838 mempool/dpaa2: not in enabled drivers build config 00:01:52.838 
mempool/octeontx: not in enabled drivers build config 00:01:52.838 mempool/stack: not in enabled drivers build config 00:01:52.838 dma/cnxk: not in enabled drivers build config 00:01:52.838 dma/dpaa: not in enabled drivers build config 00:01:52.838 dma/dpaa2: not in enabled drivers build config 00:01:52.838 dma/hisilicon: not in enabled drivers build config 00:01:52.838 dma/idxd: not in enabled drivers build config 00:01:52.838 dma/ioat: not in enabled drivers build config 00:01:52.838 dma/skeleton: not in enabled drivers build config 00:01:52.838 net/af_packet: not in enabled drivers build config 00:01:52.838 net/af_xdp: not in enabled drivers build config 00:01:52.838 net/ark: not in enabled drivers build config 00:01:52.838 net/atlantic: not in enabled drivers build config 00:01:52.838 net/avp: not in enabled drivers build config 00:01:52.838 net/axgbe: not in enabled drivers build config 00:01:52.838 net/bnx2x: not in enabled drivers build config 00:01:52.838 net/bnxt: not in enabled drivers build config 00:01:52.838 net/bonding: not in enabled drivers build config 00:01:52.838 net/cnxk: not in enabled drivers build config 00:01:52.838 net/cpfl: not in enabled drivers build config 00:01:52.838 net/cxgbe: not in enabled drivers build config 00:01:52.838 net/dpaa: not in enabled drivers build config 00:01:52.838 net/dpaa2: not in enabled drivers build config 00:01:52.838 net/e1000: not in enabled drivers build config 00:01:52.838 net/ena: not in enabled drivers build config 00:01:52.838 net/enetc: not in enabled drivers build config 00:01:52.838 net/enetfec: not in enabled drivers build config 00:01:52.838 net/enic: not in enabled drivers build config 00:01:52.838 net/failsafe: not in enabled drivers build config 00:01:52.838 net/fm10k: not in enabled drivers build config 00:01:52.838 net/gve: not in enabled drivers build config 00:01:52.838 net/hinic: not in enabled drivers build config 00:01:52.838 net/hns3: not in enabled drivers build config 00:01:52.838 net/i40e: not in enabled drivers build config 00:01:52.838 net/iavf: not in enabled drivers build config 00:01:52.838 net/ice: not in enabled drivers build config 00:01:52.838 net/idpf: not in enabled drivers build config 00:01:52.838 net/igc: not in enabled drivers build config 00:01:52.839 net/ionic: not in enabled drivers build config 00:01:52.839 net/ipn3ke: not in enabled drivers build config 00:01:52.839 net/ixgbe: not in enabled drivers build config 00:01:52.839 net/mana: not in enabled drivers build config 00:01:52.839 net/memif: not in enabled drivers build config 00:01:52.839 net/mlx4: not in enabled drivers build config 00:01:52.839 net/mlx5: not in enabled drivers build config 00:01:52.839 net/mvneta: not in enabled drivers build config 00:01:52.839 net/mvpp2: not in enabled drivers build config 00:01:52.839 net/netvsc: not in enabled drivers build config 00:01:52.839 net/nfb: not in enabled drivers build config 00:01:52.839 net/nfp: not in enabled drivers build config 00:01:52.839 net/ngbe: not in enabled drivers build config 00:01:52.839 net/null: not in enabled drivers build config 00:01:52.839 net/octeontx: not in enabled drivers build config 00:01:52.839 net/octeon_ep: not in enabled drivers build config 00:01:52.839 net/pcap: not in enabled drivers build config 00:01:52.839 net/pfe: not in enabled drivers build config 00:01:52.839 net/qede: not in enabled drivers build config 00:01:52.839 net/ring: not in enabled drivers build config 00:01:52.839 net/sfc: not in enabled drivers build config 00:01:52.839 net/softnic: 
not in enabled drivers build config 00:01:52.839 net/tap: not in enabled drivers build config 00:01:52.839 net/thunderx: not in enabled drivers build config 00:01:52.839 net/txgbe: not in enabled drivers build config 00:01:52.839 net/vdev_netvsc: not in enabled drivers build config 00:01:52.839 net/vhost: not in enabled drivers build config 00:01:52.839 net/virtio: not in enabled drivers build config 00:01:52.839 net/vmxnet3: not in enabled drivers build config 00:01:52.839 raw/*: missing internal dependency, "rawdev" 00:01:52.839 crypto/armv8: not in enabled drivers build config 00:01:52.839 crypto/bcmfs: not in enabled drivers build config 00:01:52.839 crypto/caam_jr: not in enabled drivers build config 00:01:52.839 crypto/ccp: not in enabled drivers build config 00:01:52.839 crypto/cnxk: not in enabled drivers build config 00:01:52.839 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.839 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.839 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.839 crypto/mlx5: not in enabled drivers build config 00:01:52.839 crypto/mvsam: not in enabled drivers build config 00:01:52.839 crypto/nitrox: not in enabled drivers build config 00:01:52.839 crypto/null: not in enabled drivers build config 00:01:52.839 crypto/octeontx: not in enabled drivers build config 00:01:52.839 crypto/openssl: not in enabled drivers build config 00:01:52.839 crypto/scheduler: not in enabled drivers build config 00:01:52.839 crypto/uadk: not in enabled drivers build config 00:01:52.839 crypto/virtio: not in enabled drivers build config 00:01:52.839 compress/isal: not in enabled drivers build config 00:01:52.839 compress/mlx5: not in enabled drivers build config 00:01:52.839 compress/octeontx: not in enabled drivers build config 00:01:52.839 compress/zlib: not in enabled drivers build config 00:01:52.839 regex/*: missing internal dependency, "regexdev" 00:01:52.839 ml/*: missing internal dependency, "mldev" 00:01:52.839 vdpa/ifc: not in enabled drivers build config 00:01:52.839 vdpa/mlx5: not in enabled drivers build config 00:01:52.839 vdpa/nfp: not in enabled drivers build config 00:01:52.839 vdpa/sfc: not in enabled drivers build config 00:01:52.839 event/*: missing internal dependency, "eventdev" 00:01:52.839 baseband/*: missing internal dependency, "bbdev" 00:01:52.839 gpu/*: missing internal dependency, "gpudev" 00:01:52.839 00:01:52.839 00:01:52.839 Build targets in project: 85 00:01:52.839 00:01:52.839 DPDK 23.11.0 00:01:52.839 00:01:52.839 User defined options 00:01:52.839 buildtype : debug 00:01:52.839 default_library : static 00:01:52.839 libdir : lib 00:01:52.839 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:52.839 c_args : -fPIC -Werror 00:01:52.839 c_link_args : 00:01:52.839 cpu_instruction_set: native 00:01:52.839 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:52.839 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:52.839 enable_docs : false 00:01:52.839 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:52.839 enable_kmods : false 00:01:52.839 tests : false 00:01:52.839 
00:01:52.839 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.839 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.839 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.839 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.839 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.839 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.839 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.839 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.839 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.839 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.839 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.839 [10/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.839 [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.839 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.839 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.839 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.101 [15/265] Linking static target lib/librte_kvargs.a 00:01:53.101 [16/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.101 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.101 [18/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.101 [19/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.101 [20/265] Linking static target lib/librte_log.a 00:01:53.101 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.101 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.101 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.101 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.101 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.101 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.101 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:53.101 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.101 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.101 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.101 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.101 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.101 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.101 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.101 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.101 [36/265] Linking static target lib/librte_pci.a 00:01:53.101 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.101 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:53.101 [39/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.101 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.101 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.357 [42/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.357 [43/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.357 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.357 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.357 [46/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.357 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.357 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.357 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.357 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.357 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.357 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.357 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.357 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.357 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.357 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.357 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.357 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.357 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.357 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.357 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.357 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.357 [63/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.357 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.357 [65/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.357 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.357 [67/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.357 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.357 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.357 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.357 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.357 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.357 [73/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.357 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.357 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.357 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.614 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.614 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.614 [79/265] Linking static target lib/librte_telemetry.a 00:01:53.614 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.614 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.614 [82/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.614 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.614 [84/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.614 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.615 [86/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.615 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.615 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.615 [89/265] Linking static target lib/librte_meter.a 00:01:53.615 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.615 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.615 [92/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.615 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.615 [94/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.615 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.615 [96/265] Linking static target lib/librte_ring.a 00:01:53.615 [97/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.615 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.615 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.615 [100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.615 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.615 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.615 [103/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.615 [104/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.615 [105/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.615 [106/265] Linking static target lib/librte_timer.a 00:01:53.615 [107/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.615 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.615 [109/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.615 [110/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.615 [111/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.615 [112/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.615 [113/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.615 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.615 [115/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.615 [116/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.615 [117/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.615 [118/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.615 [119/265] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.615 [120/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.615 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.615 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.615 [123/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.615 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.615 [125/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.615 [126/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.615 [127/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.615 [128/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.615 [129/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.615 [130/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.615 [131/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.615 [132/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.615 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.615 [134/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.615 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.615 [136/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.615 [137/265] Linking static target lib/librte_eal.a 00:01:53.615 [138/265] Linking static target lib/librte_cmdline.a 00:01:53.615 [139/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.615 [140/265] Linking static target lib/librte_dmadev.a 00:01:53.615 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.615 [142/265] Linking static target lib/librte_compressdev.a 00:01:53.615 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.615 [144/265] Linking static target lib/librte_rcu.a 00:01:53.615 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.615 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.615 [147/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.615 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.615 [149/265] Linking static target lib/librte_mempool.a 00:01:53.615 [150/265] Linking target lib/librte_log.so.24.0 00:01:53.615 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.615 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.615 [153/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.615 [154/265] Linking static target lib/librte_net.a 00:01:53.615 [155/265] Linking static target lib/librte_power.a 00:01:53.615 [156/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.615 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.615 [158/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.615 [159/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.615 [160/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.873 [161/265] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.873 [162/265] Linking static target lib/librte_mbuf.a 00:01:53.873 [163/265] Linking static target lib/librte_reorder.a 00:01:53.873 [164/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.873 [165/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:53.873 [166/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.873 [167/265] Linking static target lib/librte_hash.a 00:01:53.873 [168/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.873 [169/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.873 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.873 [171/265] Linking static target lib/librte_security.a 00:01:53.873 [172/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.873 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.873 [174/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.873 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.873 [176/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.873 [177/265] Linking target lib/librte_kvargs.so.24.0 00:01:53.873 [178/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.873 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.873 [180/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.873 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.873 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.873 [183/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.873 [184/265] Linking static target lib/librte_cryptodev.a 00:01:53.873 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.873 [186/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.873 [187/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.873 [188/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:53.873 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.132 [190/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:54.132 [191/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.132 [192/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.132 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.132 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.132 [195/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.132 [196/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.132 [197/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.132 [198/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.132 [199/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.132 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom 
command 00:01:54.132 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.132 [202/265] Linking static target drivers/librte_bus_vdev.a 00:01:54.132 [203/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.132 [204/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.132 [205/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.132 [206/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.132 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.132 [208/265] Linking target lib/librte_telemetry.so.24.0 00:01:54.132 [209/265] Linking static target drivers/librte_mempool_ring.a 00:01:54.132 [210/265] Linking static target drivers/librte_bus_pci.a 00:01:54.132 [211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.392 [212/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.392 [213/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.392 [214/265] Linking static target lib/librte_ethdev.a 00:01:54.392 [215/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:54.392 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.392 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.392 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.651 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.651 [220/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.651 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.651 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.910 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.910 [224/265] Linking static target lib/librte_vhost.a 00:01:54.910 [225/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.910 [226/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.323 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.892 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.425 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.425 [231/265] Linking target lib/librte_eal.so.24.0 00:02:06.425 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:06.425 [233/265] Linking target lib/librte_meter.so.24.0 00:02:06.425 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:06.425 [235/265] Linking target lib/librte_dmadev.so.24.0 00:02:06.425 [236/265] Linking target lib/librte_pci.so.24.0 00:02:06.425 [237/265] Linking target lib/librte_ring.so.24.0 00:02:06.425 [238/265] Linking target 
lib/librte_timer.so.24.0 00:02:06.425 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:06.425 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:06.425 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:06.425 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:06.425 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:06.685 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:06.685 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:06.685 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:06.685 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:06.685 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:06.685 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:06.685 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:06.943 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:06.943 [252/265] Linking target lib/librte_net.so.24.0 00:02:06.943 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:06.943 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:02:06.943 [255/265] Linking target lib/librte_reorder.so.24.0 00:02:07.202 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:07.202 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:07.202 [258/265] Linking target lib/librte_hash.so.24.0 00:02:07.202 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:07.202 [260/265] Linking target lib/librte_security.so.24.0 00:02:07.202 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:07.202 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:07.461 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:07.461 [264/265] Linking target lib/librte_power.so.24.0 00:02:07.461 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:07.461 INFO: autodetecting backend as ninja 00:02:07.461 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:08.396 CC lib/ut_mock/mock.o 00:02:08.396 CC lib/ut/ut.o 00:02:08.396 CC lib/log/log.o 00:02:08.396 CC lib/log/log_deprecated.o 00:02:08.396 CC lib/log/log_flags.o 00:02:08.396 LIB libspdk_ut_mock.a 00:02:08.396 LIB libspdk_ut.a 00:02:08.396 LIB libspdk_log.a 00:02:08.654 CXX lib/trace_parser/trace.o 00:02:08.912 CC lib/util/base64.o 00:02:08.912 CC lib/util/cpuset.o 00:02:08.912 CC lib/util/bit_array.o 00:02:08.912 CC lib/util/crc16.o 00:02:08.912 CC lib/util/crc32.o 00:02:08.912 CC lib/util/crc32c.o 00:02:08.912 CC lib/util/crc32_ieee.o 00:02:08.912 CC lib/util/crc64.o 00:02:08.912 CC lib/util/fd.o 00:02:08.912 CC lib/util/dif.o 00:02:08.912 CC lib/util/file.o 00:02:08.912 CC lib/dma/dma.o 00:02:08.912 CC lib/util/hexlify.o 00:02:08.912 CC lib/util/iov.o 00:02:08.912 CC lib/util/pipe.o 00:02:08.912 CC lib/util/math.o 00:02:08.912 CC lib/util/strerror_tls.o 00:02:08.912 CC lib/util/string.o 00:02:08.912 CC lib/util/uuid.o 00:02:08.912 CC lib/ioat/ioat.o 00:02:08.912 CC lib/util/fd_group.o 00:02:08.912 CC lib/util/xor.o 00:02:08.912 CC lib/util/zipf.o 00:02:08.912 CC lib/vfio_user/host/vfio_user.o 
00:02:08.912 CC lib/vfio_user/host/vfio_user_pci.o 00:02:08.912 LIB libspdk_dma.a 00:02:08.912 LIB libspdk_ioat.a 00:02:09.169 LIB libspdk_util.a 00:02:09.169 LIB libspdk_vfio_user.a 00:02:09.169 LIB libspdk_trace_parser.a 00:02:09.427 CC lib/env_dpdk/env.o 00:02:09.427 CC lib/env_dpdk/memory.o 00:02:09.427 CC lib/env_dpdk/pci.o 00:02:09.427 CC lib/env_dpdk/init.o 00:02:09.427 CC lib/env_dpdk/threads.o 00:02:09.427 CC lib/env_dpdk/pci_idxd.o 00:02:09.427 CC lib/env_dpdk/pci_ioat.o 00:02:09.427 CC lib/env_dpdk/pci_virtio.o 00:02:09.427 CC lib/env_dpdk/pci_vmd.o 00:02:09.427 CC lib/env_dpdk/pci_event.o 00:02:09.427 CC lib/env_dpdk/sigbus_handler.o 00:02:09.427 CC lib/env_dpdk/pci_dpdk.o 00:02:09.427 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.427 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.427 CC lib/rdma/rdma_verbs.o 00:02:09.427 CC lib/rdma/common.o 00:02:09.427 CC lib/conf/conf.o 00:02:09.427 CC lib/idxd/idxd_user.o 00:02:09.427 CC lib/idxd/idxd.o 00:02:09.427 CC lib/vmd/vmd.o 00:02:09.427 CC lib/vmd/led.o 00:02:09.427 CC lib/json/json_parse.o 00:02:09.428 CC lib/json/json_util.o 00:02:09.428 CC lib/json/json_write.o 00:02:09.687 LIB libspdk_conf.a 00:02:09.687 LIB libspdk_rdma.a 00:02:09.687 LIB libspdk_json.a 00:02:09.687 LIB libspdk_idxd.a 00:02:09.687 LIB libspdk_vmd.a 00:02:09.945 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.945 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.945 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.945 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:10.204 LIB libspdk_jsonrpc.a 00:02:10.204 LIB libspdk_env_dpdk.a 00:02:10.463 CC lib/rpc/rpc.o 00:02:10.463 LIB libspdk_rpc.a 00:02:11.031 CC lib/trace/trace.o 00:02:11.031 CC lib/trace/trace_flags.o 00:02:11.031 CC lib/trace/trace_rpc.o 00:02:11.031 CC lib/keyring/keyring.o 00:02:11.031 CC lib/keyring/keyring_rpc.o 00:02:11.031 CC lib/notify/notify.o 00:02:11.031 CC lib/notify/notify_rpc.o 00:02:11.031 LIB libspdk_trace.a 00:02:11.031 LIB libspdk_notify.a 00:02:11.031 LIB libspdk_keyring.a 00:02:11.291 CC lib/thread/thread.o 00:02:11.291 CC lib/thread/iobuf.o 00:02:11.291 CC lib/sock/sock.o 00:02:11.291 CC lib/sock/sock_rpc.o 00:02:11.550 LIB libspdk_sock.a 00:02:11.810 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.810 CC lib/nvme/nvme_ctrlr.o 00:02:11.810 CC lib/nvme/nvme_fabric.o 00:02:11.810 CC lib/nvme/nvme_ns_cmd.o 00:02:11.810 CC lib/nvme/nvme_ns.o 00:02:11.810 CC lib/nvme/nvme_pcie_common.o 00:02:11.810 CC lib/nvme/nvme_pcie.o 00:02:11.810 CC lib/nvme/nvme_quirks.o 00:02:11.810 CC lib/nvme/nvme_qpair.o 00:02:11.810 CC lib/nvme/nvme.o 00:02:11.810 CC lib/nvme/nvme_discovery.o 00:02:11.810 CC lib/nvme/nvme_transport.o 00:02:11.810 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.810 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.810 CC lib/nvme/nvme_opal.o 00:02:11.810 CC lib/nvme/nvme_tcp.o 00:02:11.810 CC lib/nvme/nvme_io_msg.o 00:02:11.810 CC lib/nvme/nvme_poll_group.o 00:02:11.810 CC lib/nvme/nvme_zns.o 00:02:11.810 CC lib/nvme/nvme_stubs.o 00:02:11.810 CC lib/nvme/nvme_cuse.o 00:02:11.810 CC lib/nvme/nvme_auth.o 00:02:11.810 CC lib/nvme/nvme_vfio_user.o 00:02:11.810 CC lib/nvme/nvme_rdma.o 00:02:12.069 LIB libspdk_thread.a 00:02:12.328 CC lib/accel/accel.o 00:02:12.328 CC lib/accel/accel_sw.o 00:02:12.328 CC lib/accel/accel_rpc.o 00:02:12.328 CC lib/vfu_tgt/tgt_endpoint.o 00:02:12.328 CC lib/vfu_tgt/tgt_rpc.o 00:02:12.328 CC lib/virtio/virtio.o 00:02:12.328 CC lib/virtio/virtio_vfio_user.o 00:02:12.328 CC lib/virtio/virtio_vhost_user.o 00:02:12.328 CC lib/virtio/virtio_pci.o 00:02:12.328 CC lib/init/json_config.o 00:02:12.328 CC 
lib/init/subsystem.o 00:02:12.328 CC lib/init/rpc.o 00:02:12.328 CC lib/init/subsystem_rpc.o 00:02:12.328 CC lib/blob/blobstore.o 00:02:12.328 CC lib/blob/request.o 00:02:12.328 CC lib/blob/blob_bs_dev.o 00:02:12.328 CC lib/blob/zeroes.o 00:02:12.587 LIB libspdk_init.a 00:02:12.587 LIB libspdk_vfu_tgt.a 00:02:12.587 LIB libspdk_virtio.a 00:02:12.846 CC lib/event/reactor.o 00:02:12.846 CC lib/event/app.o 00:02:12.846 CC lib/event/log_rpc.o 00:02:12.846 CC lib/event/scheduler_static.o 00:02:12.846 CC lib/event/app_rpc.o 00:02:13.104 LIB libspdk_accel.a 00:02:13.104 LIB libspdk_event.a 00:02:13.104 LIB libspdk_nvme.a 00:02:13.363 CC lib/bdev/bdev.o 00:02:13.363 CC lib/bdev/bdev_rpc.o 00:02:13.363 CC lib/bdev/bdev_zone.o 00:02:13.363 CC lib/bdev/part.o 00:02:13.363 CC lib/bdev/scsi_nvme.o 00:02:14.300 LIB libspdk_blob.a 00:02:14.559 CC lib/blobfs/blobfs.o 00:02:14.559 CC lib/blobfs/tree.o 00:02:14.559 CC lib/lvol/lvol.o 00:02:14.819 LIB libspdk_lvol.a 00:02:15.078 LIB libspdk_blobfs.a 00:02:15.079 LIB libspdk_bdev.a 00:02:15.337 CC lib/ublk/ublk.o 00:02:15.337 CC lib/ublk/ublk_rpc.o 00:02:15.337 CC lib/scsi/lun.o 00:02:15.337 CC lib/scsi/dev.o 00:02:15.337 CC lib/scsi/port.o 00:02:15.337 CC lib/scsi/scsi.o 00:02:15.337 CC lib/scsi/scsi_pr.o 00:02:15.337 CC lib/scsi/scsi_rpc.o 00:02:15.337 CC lib/scsi/scsi_bdev.o 00:02:15.337 CC lib/scsi/task.o 00:02:15.337 CC lib/nbd/nbd.o 00:02:15.337 CC lib/nbd/nbd_rpc.o 00:02:15.338 CC lib/ftl/ftl_core.o 00:02:15.338 CC lib/ftl/ftl_debug.o 00:02:15.338 CC lib/ftl/ftl_init.o 00:02:15.338 CC lib/ftl/ftl_layout.o 00:02:15.338 CC lib/nvmf/ctrlr.o 00:02:15.338 CC lib/ftl/ftl_io.o 00:02:15.338 CC lib/ftl/ftl_sb.o 00:02:15.338 CC lib/nvmf/ctrlr_discovery.o 00:02:15.338 CC lib/ftl/ftl_l2p.o 00:02:15.338 CC lib/nvmf/ctrlr_bdev.o 00:02:15.338 CC lib/ftl/ftl_l2p_flat.o 00:02:15.338 CC lib/nvmf/subsystem.o 00:02:15.338 CC lib/ftl/ftl_nv_cache.o 00:02:15.338 CC lib/nvmf/nvmf.o 00:02:15.338 CC lib/ftl/ftl_band.o 00:02:15.338 CC lib/nvmf/nvmf_rpc.o 00:02:15.338 CC lib/nvmf/transport.o 00:02:15.338 CC lib/ftl/ftl_band_ops.o 00:02:15.338 CC lib/nvmf/tcp.o 00:02:15.338 CC lib/ftl/ftl_writer.o 00:02:15.338 CC lib/nvmf/stubs.o 00:02:15.338 CC lib/ftl/ftl_rq.o 00:02:15.338 CC lib/nvmf/vfio_user.o 00:02:15.338 CC lib/nvmf/mdns_server.o 00:02:15.338 CC lib/ftl/ftl_reloc.o 00:02:15.597 CC lib/ftl/ftl_l2p_cache.o 00:02:15.597 CC lib/nvmf/rdma.o 00:02:15.597 CC lib/ftl/ftl_p2l.o 00:02:15.597 CC lib/nvmf/auth.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.597 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.597 CC lib/ftl/utils/ftl_conf.o 00:02:15.597 CC lib/ftl/utils/ftl_md.o 00:02:15.597 CC lib/ftl/utils/ftl_mempool.o 00:02:15.597 CC lib/ftl/utils/ftl_property.o 00:02:15.597 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.597 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.597 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.597 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.597 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.597 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.597 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.597 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.597 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.597 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.597 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.597 CC lib/ftl/base/ftl_base_dev.o 00:02:15.597 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.597 CC lib/ftl/ftl_trace.o 00:02:15.857 LIB libspdk_nbd.a 00:02:15.857 LIB libspdk_scsi.a 00:02:15.857 LIB libspdk_ublk.a 00:02:16.116 LIB libspdk_ftl.a 00:02:16.116 CC lib/iscsi/conn.o 00:02:16.116 CC lib/iscsi/init_grp.o 00:02:16.116 CC lib/iscsi/iscsi.o 00:02:16.116 CC lib/iscsi/md5.o 00:02:16.116 CC lib/iscsi/param.o 00:02:16.116 CC lib/iscsi/portal_grp.o 00:02:16.116 CC lib/iscsi/tgt_node.o 00:02:16.116 CC lib/iscsi/iscsi_subsystem.o 00:02:16.116 CC lib/iscsi/iscsi_rpc.o 00:02:16.116 CC lib/iscsi/task.o 00:02:16.116 CC lib/vhost/vhost.o 00:02:16.116 CC lib/vhost/vhost_rpc.o 00:02:16.116 CC lib/vhost/rte_vhost_user.o 00:02:16.116 CC lib/vhost/vhost_scsi.o 00:02:16.116 CC lib/vhost/vhost_blk.o 00:02:16.683 LIB libspdk_nvmf.a 00:02:16.683 LIB libspdk_vhost.a 00:02:16.942 LIB libspdk_iscsi.a 00:02:17.202 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.202 CC module/vfu_device/vfu_virtio_blk.o 00:02:17.202 CC module/vfu_device/vfu_virtio.o 00:02:17.460 CC module/vfu_device/vfu_virtio_rpc.o 00:02:17.460 CC module/vfu_device/vfu_virtio_scsi.o 00:02:17.460 CC module/keyring/file/keyring.o 00:02:17.460 CC module/keyring/file/keyring_rpc.o 00:02:17.460 CC module/blob/bdev/blob_bdev.o 00:02:17.460 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.460 CC module/accel/error/accel_error.o 00:02:17.460 CC module/accel/error/accel_error_rpc.o 00:02:17.460 LIB libspdk_env_dpdk_rpc.a 00:02:17.460 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.460 CC module/accel/dsa/accel_dsa.o 00:02:17.460 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.460 CC module/sock/posix/posix.o 00:02:17.460 CC module/accel/iaa/accel_iaa.o 00:02:17.460 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.460 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.460 CC module/accel/ioat/accel_ioat.o 00:02:17.460 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.460 LIB libspdk_keyring_file.a 00:02:17.460 LIB libspdk_scheduler_gscheduler.a 00:02:17.460 LIB libspdk_accel_error.a 00:02:17.744 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.744 LIB libspdk_scheduler_dynamic.a 00:02:17.744 LIB libspdk_blob_bdev.a 00:02:17.744 LIB libspdk_accel_ioat.a 00:02:17.744 LIB libspdk_accel_iaa.a 00:02:17.744 LIB libspdk_accel_dsa.a 00:02:17.744 LIB libspdk_vfu_device.a 00:02:18.003 LIB libspdk_sock_posix.a 00:02:18.003 CC module/bdev/nvme/bdev_nvme.o 00:02:18.003 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.003 CC module/bdev/nvme/nvme_rpc.o 00:02:18.003 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.003 CC module/bdev/nvme/vbdev_opal.o 00:02:18.003 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.003 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.003 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.003 CC module/bdev/ftl/bdev_ftl.o 00:02:18.003 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.003 CC module/bdev/aio/bdev_aio.o 00:02:18.003 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.003 CC module/bdev/delay/vbdev_delay.o 00:02:18.003 CC module/bdev/raid/bdev_raid.o 00:02:18.003 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.003 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.003 CC module/bdev/raid/raid0.o 00:02:18.003 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.003 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.003 CC module/bdev/malloc/bdev_malloc.o 
00:02:18.003 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.003 CC module/bdev/raid/concat.o 00:02:18.003 CC module/bdev/raid/raid1.o 00:02:18.003 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.003 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.003 CC module/bdev/split/vbdev_split.o 00:02:18.003 CC module/bdev/error/vbdev_error.o 00:02:18.003 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.003 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.003 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.003 CC module/bdev/gpt/gpt.o 00:02:18.003 CC module/bdev/null/bdev_null.o 00:02:18.003 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.003 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.003 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.003 CC module/bdev/null/bdev_null_rpc.o 00:02:18.003 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.003 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.003 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.003 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.003 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.003 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.262 LIB libspdk_bdev_split.a 00:02:18.262 LIB libspdk_blobfs_bdev.a 00:02:18.262 LIB libspdk_bdev_ftl.a 00:02:18.262 LIB libspdk_bdev_error.a 00:02:18.262 LIB libspdk_bdev_gpt.a 00:02:18.262 LIB libspdk_bdev_null.a 00:02:18.262 LIB libspdk_bdev_passthru.a 00:02:18.262 LIB libspdk_bdev_aio.a 00:02:18.262 LIB libspdk_bdev_iscsi.a 00:02:18.262 LIB libspdk_bdev_zone_block.a 00:02:18.262 LIB libspdk_bdev_delay.a 00:02:18.262 LIB libspdk_bdev_malloc.a 00:02:18.262 LIB libspdk_bdev_lvol.a 00:02:18.521 LIB libspdk_bdev_virtio.a 00:02:18.521 LIB libspdk_bdev_raid.a 00:02:19.464 LIB libspdk_bdev_nvme.a 00:02:19.820 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.820 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:19.820 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:19.820 CC module/event/subsystems/vmd/vmd.o 00:02:19.820 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:19.820 CC module/event/subsystems/iobuf/iobuf.o 00:02:19.820 CC module/event/subsystems/keyring/keyring.o 00:02:19.820 CC module/event/subsystems/sock/sock.o 00:02:19.820 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.081 LIB libspdk_event_vhost_blk.a 00:02:20.081 LIB libspdk_event_scheduler.a 00:02:20.081 LIB libspdk_event_vmd.a 00:02:20.081 LIB libspdk_event_iobuf.a 00:02:20.081 LIB libspdk_event_keyring.a 00:02:20.081 LIB libspdk_event_sock.a 00:02:20.081 LIB libspdk_event_vfu_tgt.a 00:02:20.340 CC module/event/subsystems/accel/accel.o 00:02:20.340 LIB libspdk_event_accel.a 00:02:20.600 CC module/event/subsystems/bdev/bdev.o 00:02:20.859 LIB libspdk_event_bdev.a 00:02:21.118 CC module/event/subsystems/ublk/ublk.o 00:02:21.118 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.118 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.118 CC module/event/subsystems/nbd/nbd.o 00:02:21.118 CC module/event/subsystems/scsi/scsi.o 00:02:21.118 LIB libspdk_event_ublk.a 00:02:21.118 LIB libspdk_event_nbd.a 00:02:21.118 LIB libspdk_event_scsi.a 00:02:21.118 LIB libspdk_event_nvmf.a 00:02:21.688 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.688 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.688 LIB libspdk_event_vhost_scsi.a 00:02:21.688 LIB libspdk_event_iscsi.a 00:02:21.947 CC test/rpc_client/rpc_client_test.o 00:02:21.947 TEST_HEADER include/spdk/accel.h 00:02:21.947 TEST_HEADER include/spdk/accel_module.h 00:02:21.947 TEST_HEADER include/spdk/barrier.h 00:02:21.947 TEST_HEADER include/spdk/assert.h 
00:02:21.948 TEST_HEADER include/spdk/bdev.h 00:02:21.948 TEST_HEADER include/spdk/base64.h 00:02:21.948 TEST_HEADER include/spdk/bdev_module.h 00:02:21.948 TEST_HEADER include/spdk/bdev_zone.h 00:02:21.948 TEST_HEADER include/spdk/bit_array.h 00:02:21.948 TEST_HEADER include/spdk/bit_pool.h 00:02:21.948 TEST_HEADER include/spdk/blob_bdev.h 00:02:21.948 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:21.948 TEST_HEADER include/spdk/blobfs.h 00:02:21.948 TEST_HEADER include/spdk/blob.h 00:02:21.948 CC app/spdk_top/spdk_top.o 00:02:21.948 TEST_HEADER include/spdk/config.h 00:02:21.948 TEST_HEADER include/spdk/conf.h 00:02:21.948 TEST_HEADER include/spdk/cpuset.h 00:02:21.948 TEST_HEADER include/spdk/crc16.h 00:02:21.948 TEST_HEADER include/spdk/crc32.h 00:02:21.948 CXX app/trace/trace.o 00:02:21.948 TEST_HEADER include/spdk/crc64.h 00:02:21.948 TEST_HEADER include/spdk/dif.h 00:02:21.948 TEST_HEADER include/spdk/dma.h 00:02:21.948 CC app/spdk_lspci/spdk_lspci.o 00:02:21.948 TEST_HEADER include/spdk/env_dpdk.h 00:02:21.948 TEST_HEADER include/spdk/endian.h 00:02:21.948 TEST_HEADER include/spdk/env.h 00:02:21.948 TEST_HEADER include/spdk/event.h 00:02:21.948 TEST_HEADER include/spdk/fd_group.h 00:02:21.948 TEST_HEADER include/spdk/fd.h 00:02:21.948 CC app/spdk_nvme_identify/identify.o 00:02:21.948 TEST_HEADER include/spdk/file.h 00:02:21.948 CC app/trace_record/trace_record.o 00:02:21.948 TEST_HEADER include/spdk/ftl.h 00:02:21.948 TEST_HEADER include/spdk/gpt_spec.h 00:02:21.948 TEST_HEADER include/spdk/hexlify.h 00:02:21.948 CC app/spdk_nvme_perf/perf.o 00:02:21.948 TEST_HEADER include/spdk/histogram_data.h 00:02:21.948 TEST_HEADER include/spdk/idxd.h 00:02:21.948 CC app/spdk_nvme_discover/discovery_aer.o 00:02:21.948 TEST_HEADER include/spdk/idxd_spec.h 00:02:21.948 TEST_HEADER include/spdk/init.h 00:02:21.948 TEST_HEADER include/spdk/ioat.h 00:02:21.948 TEST_HEADER include/spdk/ioat_spec.h 00:02:21.948 TEST_HEADER include/spdk/iscsi_spec.h 00:02:21.948 TEST_HEADER include/spdk/jsonrpc.h 00:02:21.948 TEST_HEADER include/spdk/json.h 00:02:21.948 TEST_HEADER include/spdk/keyring.h 00:02:21.948 TEST_HEADER include/spdk/keyring_module.h 00:02:21.948 TEST_HEADER include/spdk/likely.h 00:02:21.948 TEST_HEADER include/spdk/log.h 00:02:21.948 TEST_HEADER include/spdk/lvol.h 00:02:21.948 TEST_HEADER include/spdk/mmio.h 00:02:21.948 TEST_HEADER include/spdk/memory.h 00:02:21.948 TEST_HEADER include/spdk/nbd.h 00:02:21.948 TEST_HEADER include/spdk/notify.h 00:02:21.948 TEST_HEADER include/spdk/nvme.h 00:02:21.948 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:21.948 TEST_HEADER include/spdk/nvme_intel.h 00:02:21.948 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:21.948 TEST_HEADER include/spdk/nvme_spec.h 00:02:21.948 TEST_HEADER include/spdk/nvme_zns.h 00:02:21.948 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:21.948 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:21.948 TEST_HEADER include/spdk/nvmf.h 00:02:21.948 TEST_HEADER include/spdk/nvmf_spec.h 00:02:21.948 TEST_HEADER include/spdk/nvmf_transport.h 00:02:21.948 TEST_HEADER include/spdk/opal.h 00:02:21.948 TEST_HEADER include/spdk/opal_spec.h 00:02:21.948 TEST_HEADER include/spdk/pci_ids.h 00:02:21.948 TEST_HEADER include/spdk/pipe.h 00:02:21.948 TEST_HEADER include/spdk/queue.h 00:02:21.948 TEST_HEADER include/spdk/scheduler.h 00:02:21.948 TEST_HEADER include/spdk/rpc.h 00:02:21.948 TEST_HEADER include/spdk/reduce.h 00:02:21.948 TEST_HEADER include/spdk/scsi.h 00:02:21.948 TEST_HEADER include/spdk/scsi_spec.h 00:02:21.948 TEST_HEADER include/spdk/sock.h 
00:02:21.948 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:21.948 TEST_HEADER include/spdk/stdinc.h 00:02:21.948 TEST_HEADER include/spdk/string.h 00:02:21.948 CC app/spdk_dd/spdk_dd.o 00:02:21.948 TEST_HEADER include/spdk/thread.h 00:02:21.948 CC app/spdk_tgt/spdk_tgt.o 00:02:21.948 TEST_HEADER include/spdk/trace.h 00:02:21.948 TEST_HEADER include/spdk/trace_parser.h 00:02:21.948 CC app/vhost/vhost.o 00:02:21.948 CC app/iscsi_tgt/iscsi_tgt.o 00:02:21.948 TEST_HEADER include/spdk/tree.h 00:02:21.948 TEST_HEADER include/spdk/ublk.h 00:02:21.948 TEST_HEADER include/spdk/util.h 00:02:21.948 TEST_HEADER include/spdk/uuid.h 00:02:21.948 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:21.948 TEST_HEADER include/spdk/version.h 00:02:21.948 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:21.948 TEST_HEADER include/spdk/vhost.h 00:02:21.948 TEST_HEADER include/spdk/vmd.h 00:02:21.948 TEST_HEADER include/spdk/xor.h 00:02:21.948 TEST_HEADER include/spdk/zipf.h 00:02:21.948 CXX test/cpp_headers/accel.o 00:02:21.948 CXX test/cpp_headers/accel_module.o 00:02:21.948 CXX test/cpp_headers/assert.o 00:02:21.948 CXX test/cpp_headers/barrier.o 00:02:22.208 CXX test/cpp_headers/base64.o 00:02:22.208 CXX test/cpp_headers/bdev_module.o 00:02:22.208 CXX test/cpp_headers/bdev_zone.o 00:02:22.208 CXX test/cpp_headers/bdev.o 00:02:22.208 CXX test/cpp_headers/bit_array.o 00:02:22.208 CXX test/cpp_headers/bit_pool.o 00:02:22.208 CXX test/cpp_headers/blob_bdev.o 00:02:22.208 CXX test/cpp_headers/blobfs_bdev.o 00:02:22.208 CXX test/cpp_headers/blob.o 00:02:22.208 CXX test/cpp_headers/blobfs.o 00:02:22.208 CXX test/cpp_headers/conf.o 00:02:22.208 CXX test/cpp_headers/config.o 00:02:22.208 CXX test/cpp_headers/crc16.o 00:02:22.208 CXX test/cpp_headers/cpuset.o 00:02:22.208 CXX test/cpp_headers/crc32.o 00:02:22.208 CXX test/cpp_headers/crc64.o 00:02:22.208 CXX test/cpp_headers/dma.o 00:02:22.208 CC app/nvmf_tgt/nvmf_main.o 00:02:22.208 CXX test/cpp_headers/dif.o 00:02:22.208 CXX test/cpp_headers/endian.o 00:02:22.208 CXX test/cpp_headers/env_dpdk.o 00:02:22.208 CXX test/cpp_headers/env.o 00:02:22.208 CXX test/cpp_headers/event.o 00:02:22.208 CXX test/cpp_headers/fd_group.o 00:02:22.208 CC test/nvme/startup/startup.o 00:02:22.208 CXX test/cpp_headers/fd.o 00:02:22.208 CC test/event/reactor/reactor.o 00:02:22.208 CXX test/cpp_headers/file.o 00:02:22.208 CC test/nvme/aer/aer.o 00:02:22.208 CXX test/cpp_headers/ftl.o 00:02:22.208 CXX test/cpp_headers/gpt_spec.o 00:02:22.208 CXX test/cpp_headers/hexlify.o 00:02:22.208 CC test/app/histogram_perf/histogram_perf.o 00:02:22.208 CXX test/cpp_headers/histogram_data.o 00:02:22.208 CXX test/cpp_headers/idxd.o 00:02:22.208 CC test/event/event_perf/event_perf.o 00:02:22.208 CC test/nvme/connect_stress/connect_stress.o 00:02:22.208 CXX test/cpp_headers/init.o 00:02:22.208 CC test/app/stub/stub.o 00:02:22.208 CXX test/cpp_headers/idxd_spec.o 00:02:22.208 CC test/nvme/reset/reset.o 00:02:22.208 CC test/env/pci/pci_ut.o 00:02:22.208 CC test/event/reactor_perf/reactor_perf.o 00:02:22.208 CC test/env/vtophys/vtophys.o 00:02:22.208 CC test/nvme/simple_copy/simple_copy.o 00:02:22.208 CC test/nvme/err_injection/err_injection.o 00:02:22.208 CC test/app/jsoncat/jsoncat.o 00:02:22.208 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:22.208 CC test/nvme/reserve/reserve.o 00:02:22.208 CC test/thread/lock/spdk_lock.o 00:02:22.208 CC test/nvme/boot_partition/boot_partition.o 00:02:22.208 CC test/env/memory/memory_ut.o 00:02:22.208 CC test/thread/poller_perf/poller_perf.o 00:02:22.208 CC 
test/nvme/sgl/sgl.o 00:02:22.208 CC test/nvme/overhead/overhead.o 00:02:22.208 CC test/nvme/e2edp/nvme_dp.o 00:02:22.208 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:22.208 CC test/nvme/cuse/cuse.o 00:02:22.208 CC test/nvme/fused_ordering/fused_ordering.o 00:02:22.208 CC test/nvme/compliance/nvme_compliance.o 00:02:22.208 CC test/event/app_repeat/app_repeat.o 00:02:22.208 CC test/nvme/fdp/fdp.o 00:02:22.208 CC test/accel/dif/dif.o 00:02:22.208 CC examples/accel/perf/accel_perf.o 00:02:22.208 CC app/fio/nvme/fio_plugin.o 00:02:22.208 CC examples/nvme/reconnect/reconnect.o 00:02:22.208 CC examples/nvme/abort/abort.o 00:02:22.208 CC examples/ioat/verify/verify.o 00:02:22.208 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.208 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.208 CC examples/vmd/lsvmd/lsvmd.o 00:02:22.208 CXX test/cpp_headers/ioat.o 00:02:22.208 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.208 CC examples/idxd/perf/perf.o 00:02:22.208 CC examples/ioat/perf/perf.o 00:02:22.208 CC examples/nvme/hello_world/hello_world.o 00:02:22.208 CC examples/nvme/hotplug/hotplug.o 00:02:22.208 CC examples/util/zipf/zipf.o 00:02:22.208 CC test/dma/test_dma/test_dma.o 00:02:22.208 CC test/app/bdev_svc/bdev_svc.o 00:02:22.208 CC examples/nvme/arbitration/arbitration.o 00:02:22.208 CC examples/vmd/led/led.o 00:02:22.208 CC test/blobfs/mkfs/mkfs.o 00:02:22.208 LINK rpc_client_test 00:02:22.209 CC test/bdev/bdevio/bdevio.o 00:02:22.209 CC examples/sock/hello_world/hello_sock.o 00:02:22.209 CC test/event/scheduler/scheduler.o 00:02:22.209 CC examples/blob/cli/blobcli.o 00:02:22.209 CC examples/thread/thread/thread_ex.o 00:02:22.209 CC app/fio/bdev/fio_plugin.o 00:02:22.209 CC examples/blob/hello_world/hello_blob.o 00:02:22.209 LINK spdk_lspci 00:02:22.209 CC examples/nvmf/nvmf/nvmf.o 00:02:22.209 CC test/env/mem_callbacks/mem_callbacks.o 00:02:22.209 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.209 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.209 CC test/lvol/esnap/esnap.o 00:02:22.209 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.209 LINK spdk_nvme_discover 00:02:22.209 LINK vtophys 00:02:22.209 LINK jsoncat 00:02:22.209 LINK reactor 00:02:22.209 LINK histogram_perf 00:02:22.209 LINK interrupt_tgt 00:02:22.209 CXX test/cpp_headers/iscsi_spec.o 00:02:22.209 LINK reactor_perf 00:02:22.209 LINK event_perf 00:02:22.209 CXX test/cpp_headers/ioat_spec.o 00:02:22.209 CXX test/cpp_headers/json.o 00:02:22.209 LINK poller_perf 00:02:22.209 CXX test/cpp_headers/jsonrpc.o 00:02:22.469 CXX test/cpp_headers/keyring.o 00:02:22.469 LINK spdk_trace_record 00:02:22.469 CXX test/cpp_headers/keyring_module.o 00:02:22.469 CXX test/cpp_headers/likely.o 00:02:22.469 CXX test/cpp_headers/log.o 00:02:22.469 CXX test/cpp_headers/lvol.o 00:02:22.469 CXX test/cpp_headers/memory.o 00:02:22.469 CXX test/cpp_headers/mmio.o 00:02:22.469 CXX test/cpp_headers/nbd.o 00:02:22.469 LINK env_dpdk_post_init 00:02:22.469 CXX test/cpp_headers/notify.o 00:02:22.469 CXX test/cpp_headers/nvme.o 00:02:22.469 CXX test/cpp_headers/nvme_intel.o 00:02:22.469 CXX test/cpp_headers/nvme_ocssd.o 00:02:22.469 LINK lsvmd 00:02:22.469 LINK startup 00:02:22.469 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:22.469 LINK stub 00:02:22.470 CXX test/cpp_headers/nvme_spec.o 00:02:22.470 CXX test/cpp_headers/nvme_zns.o 00:02:22.470 LINK app_repeat 00:02:22.470 CXX test/cpp_headers/nvmf_cmd.o 00:02:22.470 LINK connect_stress 00:02:22.470 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:22.470 CXX test/cpp_headers/nvmf.o 00:02:22.470 CXX 
test/cpp_headers/nvmf_spec.o 00:02:22.470 LINK boot_partition 00:02:22.470 CXX test/cpp_headers/nvmf_transport.o 00:02:22.470 CXX test/cpp_headers/opal.o 00:02:22.470 LINK vhost 00:02:22.470 CXX test/cpp_headers/opal_spec.o 00:02:22.470 LINK err_injection 00:02:22.470 LINK nvmf_tgt 00:02:22.470 CXX test/cpp_headers/pci_ids.o 00:02:22.470 CXX test/cpp_headers/pipe.o 00:02:22.470 LINK zipf 00:02:22.470 LINK iscsi_tgt 00:02:22.470 CXX test/cpp_headers/queue.o 00:02:22.470 CXX test/cpp_headers/reduce.o 00:02:22.470 LINK doorbell_aers 00:02:22.470 LINK led 00:02:22.470 LINK spdk_tgt 00:02:22.470 CXX test/cpp_headers/rpc.o 00:02:22.470 LINK reserve 00:02:22.470 LINK fused_ordering 00:02:22.470 CXX test/cpp_headers/scheduler.o 00:02:22.470 CXX test/cpp_headers/scsi.o 00:02:22.470 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:22.470 struct spdk_nvme_fdp_ruhs ruhs; 00:02:22.470 ^ 00:02:22.470 LINK pmr_persistence 00:02:22.470 LINK bdev_svc 00:02:22.470 LINK simple_copy 00:02:22.470 LINK cmb_copy 00:02:22.470 LINK verify 00:02:22.470 LINK ioat_perf 00:02:22.470 CXX test/cpp_headers/scsi_spec.o 00:02:22.470 LINK reset 00:02:22.470 LINK aer 00:02:22.470 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:22.470 LINK mkfs 00:02:22.470 LINK hotplug 00:02:22.470 LINK hello_world 00:02:22.470 LINK sgl 00:02:22.470 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:22.470 LINK scheduler 00:02:22.470 LINK hello_sock 00:02:22.470 LINK overhead 00:02:22.470 LINK fdp 00:02:22.470 LINK hello_blob 00:02:22.470 LINK thread 00:02:22.470 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:22.470 LINK nvme_dp 00:02:22.470 LINK spdk_trace 00:02:22.470 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:22.470 LINK hello_bdev 00:02:22.470 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:22.470 CXX test/cpp_headers/sock.o 00:02:22.730 CXX test/cpp_headers/stdinc.o 00:02:22.730 CXX test/cpp_headers/string.o 00:02:22.730 CXX test/cpp_headers/thread.o 00:02:22.730 CXX test/cpp_headers/trace.o 00:02:22.730 CXX test/cpp_headers/trace_parser.o 00:02:22.730 CXX test/cpp_headers/tree.o 00:02:22.730 CXX test/cpp_headers/ublk.o 00:02:22.730 CXX test/cpp_headers/util.o 00:02:22.730 CXX test/cpp_headers/uuid.o 00:02:22.730 CXX test/cpp_headers/version.o 00:02:22.730 CXX test/cpp_headers/vfio_user_pci.o 00:02:22.730 CXX test/cpp_headers/vfio_user_spec.o 00:02:22.730 CXX test/cpp_headers/vhost.o 00:02:22.730 CXX test/cpp_headers/vmd.o 00:02:22.730 LINK nvmf 00:02:22.730 CXX test/cpp_headers/xor.o 00:02:22.730 LINK idxd_perf 00:02:22.730 CXX test/cpp_headers/zipf.o 00:02:22.730 LINK reconnect 00:02:22.730 LINK dif 00:02:22.730 LINK test_dma 00:02:22.730 LINK abort 00:02:22.730 LINK arbitration 00:02:22.730 LINK bdevio 00:02:22.730 LINK pci_ut 00:02:22.730 LINK spdk_dd 00:02:22.730 LINK nvme_manage 00:02:22.730 LINK nvme_compliance 00:02:22.730 LINK accel_perf 00:02:22.730 LINK nvme_fuzz 00:02:22.988 LINK blobcli 00:02:22.988 LINK mem_callbacks 00:02:22.988 LINK llvm_vfio_fuzz 00:02:22.988 1 warning generated. 
00:02:22.988 LINK spdk_bdev 00:02:22.988 LINK spdk_nvme 00:02:22.988 LINK spdk_nvme_identify 00:02:23.246 LINK spdk_nvme_perf 00:02:23.246 LINK bdevperf 00:02:23.246 LINK vhost_fuzz 00:02:23.246 LINK memory_ut 00:02:23.246 LINK spdk_top 00:02:23.504 LINK cuse 00:02:23.504 LINK llvm_nvme_fuzz 00:02:23.763 LINK spdk_lock 00:02:24.022 LINK iscsi_fuzz 00:02:25.927 LINK esnap 00:02:26.186 00:02:26.186 real 0m41.717s 00:02:26.186 user 6m6.984s 00:02:26.186 sys 2m41.792s 00:02:26.186 05:27:16 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:26.445 05:27:16 make -- common/autotest_common.sh@10 -- $ set +x 00:02:26.445 ************************************ 00:02:26.445 END TEST make 00:02:26.445 ************************************ 00:02:26.445 05:27:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.445 05:27:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:26.445 05:27:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:26.445 05:27:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.445 05:27:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.445 05:27:16 -- pm/common@44 -- $ pid=3137178 00:02:26.445 05:27:16 -- pm/common@50 -- $ kill -TERM 3137178 00:02:26.445 05:27:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.445 05:27:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.445 05:27:16 -- pm/common@44 -- $ pid=3137180 00:02:26.445 05:27:16 -- pm/common@50 -- $ kill -TERM 3137180 00:02:26.445 05:27:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.445 05:27:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.446 05:27:16 -- pm/common@44 -- $ pid=3137182 00:02:26.446 05:27:16 -- pm/common@50 -- $ kill -TERM 3137182 00:02:26.446 05:27:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.446 05:27:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.446 05:27:16 -- pm/common@44 -- $ pid=3137205 00:02:26.446 05:27:16 -- pm/common@50 -- $ sudo -E kill -TERM 3137205 00:02:26.446 05:27:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.446 05:27:16 -- nvmf/common.sh@7 -- # uname -s 00:02:26.446 05:27:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.446 05:27:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.446 05:27:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.446 05:27:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.446 05:27:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.446 05:27:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.446 05:27:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.446 05:27:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.446 05:27:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.446 05:27:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.446 05:27:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:26.446 05:27:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:26.446 05:27:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:02:26.446 05:27:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.446 05:27:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:26.446 05:27:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.446 05:27:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:26.446 05:27:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.446 05:27:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.446 05:27:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.446 05:27:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.446 05:27:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.446 05:27:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.446 05:27:16 -- paths/export.sh@5 -- # export PATH 00:02:26.446 05:27:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.446 05:27:16 -- nvmf/common.sh@47 -- # : 0 00:02:26.446 05:27:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:26.446 05:27:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:26.446 05:27:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.446 05:27:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.446 05:27:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.446 05:27:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:26.446 05:27:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:26.446 05:27:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:26.446 05:27:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.446 05:27:16 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.446 05:27:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.446 05:27:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.446 05:27:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:26.446 05:27:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.446 05:27:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:26.446 05:27:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.446 05:27:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.446 05:27:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.446 05:27:16 -- 
spdk/autotest.sh@48 -- # udevadm_pid=3198698 00:02:26.446 05:27:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.446 05:27:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.446 05:27:16 -- pm/common@17 -- # local monitor 00:02:26.446 05:27:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.446 05:27:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.446 05:27:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.446 05:27:16 -- pm/common@21 -- # date +%s 00:02:26.446 05:27:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.446 05:27:16 -- pm/common@21 -- # date +%s 00:02:26.446 05:27:16 -- pm/common@25 -- # sleep 1 00:02:26.446 05:27:16 -- pm/common@21 -- # date +%s 00:02:26.446 05:27:16 -- pm/common@21 -- # date +%s 00:02:26.446 05:27:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715743636 00:02:26.705 05:27:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715743636 00:02:26.705 05:27:16 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715743636 00:02:26.706 05:27:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715743636 00:02:26.706 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715743636_collect-vmstat.pm.log 00:02:26.706 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715743636_collect-cpu-load.pm.log 00:02:26.706 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715743636_collect-cpu-temp.pm.log 00:02:26.706 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715743636_collect-bmc-pm.bmc.pm.log 00:02:27.643 05:27:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.643 05:27:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.643 05:27:17 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:27.643 05:27:17 -- common/autotest_common.sh@10 -- # set +x 00:02:27.643 05:27:17 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.643 05:27:17 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:27.643 05:27:17 -- common/autotest_common.sh@10 -- # set +x 00:02:27.643 05:27:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:27.644 05:27:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:27.644 05:27:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:27.644 05:27:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:27.644 05:27:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:27.644 05:27:17 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:02:27.644 05:27:17 -- common/autotest_common.sh@1452 -- # uname 00:02:27.644 05:27:17 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:27.644 05:27:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.644 05:27:17 -- common/autotest_common.sh@1472 -- # uname 00:02:27.644 05:27:17 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:27.644 05:27:17 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:27.644 05:27:17 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:27.644 05:27:17 -- spdk/autotest.sh@72 -- # hash lcov 00:02:27.644 05:27:17 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:27.644 05:27:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:27.644 05:27:17 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:27.644 05:27:17 -- common/autotest_common.sh@10 -- # set +x 00:02:27.644 05:27:17 -- spdk/autotest.sh@91 -- # rm -f 00:02:27.644 05:27:17 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.934 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.934 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:30.934 05:27:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:30.934 05:27:20 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:30.934 05:27:20 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:30.934 05:27:20 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:30.934 05:27:20 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:30.934 05:27:20 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:30.934 05:27:20 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:30.934 05:27:20 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:30.934 05:27:20 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:30.934 05:27:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:30.934 05:27:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:30.934 05:27:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:30.934 05:27:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:30.934 05:27:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:30.934 05:27:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 
00:02:30.934 No valid GPT data, bailing 00:02:30.934 05:27:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:30.934 05:27:20 -- scripts/common.sh@391 -- # pt= 00:02:30.934 05:27:20 -- scripts/common.sh@392 -- # return 1 00:02:30.934 05:27:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:30.934 1+0 records in 00:02:30.934 1+0 records out 00:02:30.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539034 s, 195 MB/s 00:02:30.934 05:27:20 -- spdk/autotest.sh@118 -- # sync 00:02:30.934 05:27:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:30.934 05:27:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:30.934 05:27:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:39.058 05:27:27 -- spdk/autotest.sh@124 -- # uname -s 00:02:39.058 05:27:27 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:39.059 05:27:27 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:39.059 05:27:27 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:39.059 05:27:27 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:39.059 05:27:27 -- common/autotest_common.sh@10 -- # set +x 00:02:39.059 ************************************ 00:02:39.059 START TEST setup.sh 00:02:39.059 ************************************ 00:02:39.059 05:27:27 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:39.059 * Looking for test storage... 00:02:39.059 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:39.059 05:27:27 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:39.059 05:27:27 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:39.059 05:27:27 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:39.059 05:27:27 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:39.059 05:27:27 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:39.059 05:27:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:39.059 ************************************ 00:02:39.059 START TEST acl 00:02:39.059 ************************************ 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:39.059 * Looking for test storage... 
00:02:39.059 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:39.059 05:27:27 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:39.059 05:27:27 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:39.059 05:27:27 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:39.059 05:27:27 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:39.059 05:27:27 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:39.059 05:27:27 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:39.059 05:27:27 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:39.059 05:27:27 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.059 05:27:27 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.598 05:27:31 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:41.598 05:27:31 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:41.598 05:27:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.598 05:27:31 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:41.598 05:27:31 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.598 05:27:31 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:44.890 Hugepages 00:02:44.890 node hugesize free / total 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 00:02:44.890 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:44.890 05:27:34 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:44.890 05:27:34 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:44.890 05:27:34 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:44.890 05:27:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:44.890 ************************************ 00:02:44.890 START TEST denied 00:02:44.890 ************************************ 00:02:44.890 05:27:34 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:44.890 05:27:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:44.890 05:27:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:44.890 05:27:34 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:44.890 05:27:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.891 05:27:34 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:48.184 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:48.184 
05:27:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:48.184 05:27:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.474 00:02:52.474 real 0m7.791s 00:02:52.474 user 0m2.394s 00:02:52.474 sys 0m4.709s 00:02:52.474 05:27:42 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:52.474 05:27:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:52.474 ************************************ 00:02:52.474 END TEST denied 00:02:52.474 ************************************ 00:02:52.474 05:27:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:52.474 05:27:42 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:52.474 05:27:42 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:52.474 05:27:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:52.474 ************************************ 00:02:52.474 START TEST allowed 00:02:52.474 ************************************ 00:02:52.474 05:27:42 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:02:52.474 05:27:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:52.474 05:27:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:52.474 05:27:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:52.474 05:27:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.474 05:27:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:57.748 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.748 05:27:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:57.748 05:27:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:57.748 05:27:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:57.749 05:27:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.749 05:27:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.281 00:03:00.281 real 0m7.709s 00:03:00.281 user 0m1.927s 00:03:00.281 sys 0m4.215s 00:03:00.281 05:27:50 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:00.281 05:27:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:00.281 ************************************ 00:03:00.281 END TEST allowed 00:03:00.281 ************************************ 00:03:00.281 00:03:00.281 real 0m22.440s 00:03:00.281 user 0m6.672s 00:03:00.281 sys 0m13.694s 00:03:00.281 05:27:50 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:00.281 05:27:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:00.281 ************************************ 00:03:00.281 END TEST acl 00:03:00.281 ************************************ 00:03:00.281 05:27:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.281 05:27:50 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
00:03:00.281 05:27:50 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:00.281 05:27:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.542 ************************************ 00:03:00.542 START TEST hugepages 00:03:00.542 ************************************ 00:03:00.542 05:27:50 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.542 * Looking for test storage... 00:03:00.542 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40641524 kB' 'MemAvailable: 42284592 kB' 'Buffers: 4156 kB' 'Cached: 11057760 kB' 'SwapCached: 20048 kB' 'Active: 6756536 kB' 'Inactive: 4911456 kB' 'Active(anon): 6301956 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589528 kB' 'Mapped: 203460 kB' 'Shmem: 8922672 kB' 'KReclaimable: 301552 kB' 'Slab: 916440 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 614888 kB' 'KernelStack: 21952 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 10893400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216296 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.542 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.543 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.544 05:27:50 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:00.544 05:27:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:00.544 05:27:50 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:00.544 05:27:50 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:00.544 05:27:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.544 ************************************ 00:03:00.544 START TEST default_setup 00:03:00.544 ************************************ 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.544 05:27:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:03.831 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
00:03:03.831 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:03.831 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:05.211 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42883188 kB' 'MemAvailable: 44526256 kB' 'Buffers: 4156 kB' 'Cached: 11057884 kB' 'SwapCached: 20048 kB' 'Active: 6765552 kB' 'Inactive: 4911456 kB' 'Active(anon): 6310972 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597820 kB' 'Mapped: 203828 kB' 'Shmem: 8922796 kB' 'KReclaimable: 301552 kB' 'Slab: 914108 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612556 kB' 'KernelStack: 22128 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 10902952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.211 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 
-- # local var val 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42885484 kB' 'MemAvailable: 44529056 kB' 'Buffers: 4156 kB' 'Cached: 11057888 kB' 'SwapCached: 20048 kB' 'Active: 6765060 kB' 'Inactive: 4911456 kB' 'Active(anon): 6310480 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597396 kB' 'Mapped: 203812 kB' 'Shmem: 8922800 kB' 'KReclaimable: 301552 kB' 'Slab: 914092 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612540 kB' 'KernelStack: 22064 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10902972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.212 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42888876 kB' 'MemAvailable: 44531944 kB' 'Buffers: 4156 kB' 'Cached: 11057904 kB' 'SwapCached: 20048 kB' 'Active: 6764280 kB' 'Inactive: 4911456 kB' 'Active(anon): 6309700 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597432 kB' 'Mapped: 203712 kB' 'Shmem: 8922816 kB' 'KReclaimable: 301552 kB' 'Slab: 914092 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612540 kB' 'KernelStack: 22000 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10904232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 
05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.213 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:05.214 nr_hugepages=1024 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.214 resv_hugepages=0 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.214 surplus_hugepages=0 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.214 anon_hugepages=0 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42888436 kB' 'MemAvailable: 44531504 kB' 'Buffers: 4156 kB' 'Cached: 11057928 kB' 'SwapCached: 20048 kB' 'Active: 
6764600 kB' 'Inactive: 4911456 kB' 'Active(anon): 6310020 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597228 kB' 'Mapped: 203712 kB' 'Shmem: 8922840 kB' 'KReclaimable: 301552 kB' 'Slab: 914092 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612540 kB' 'KernelStack: 22096 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10904504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 
05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.214 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:05.215 
05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20837548 kB' 'MemUsed: 11801592 kB' 'SwapCached: 17412 kB' 'Active: 3996536 kB' 'Inactive: 3926016 kB' 'Active(anon): 3950064 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514424 kB' 'Mapped: 117172 kB' 'AnonPages: 411256 kB' 'Shmem: 6744360 kB' 'KernelStack: 12184 kB' 'PageTables: 5312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195540 kB' 'Slab: 522124 kB' 'SReclaimable: 195540 kB' 'SUnreclaim: 326584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:05.215 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.215 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:05.216 node0=1024 expecting 1024 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:05.216 00:03:05.216 real 0m4.554s 00:03:05.216 user 0m1.066s 00:03:05.216 sys 0m1.964s 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:05.216 05:27:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:05.216 ************************************ 00:03:05.216 END TEST default_setup 00:03:05.216 ************************************ 00:03:05.216 05:27:55 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:05.216 05:27:55 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:05.216 05:27:55 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:05.216 05:27:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:05.216 ************************************ 00:03:05.216 START TEST per_node_1G_alloc 00:03:05.216 ************************************ 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.216 05:27:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:08.508 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:08.508 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.508 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42933748 kB' 'MemAvailable: 44576816 kB' 'Buffers: 4156 kB' 'Cached: 11058032 kB' 'SwapCached: 20048 kB' 'Active: 6763120 kB' 'Inactive: 4911456 kB' 'Active(anon): 6308540 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595588 kB' 'Mapped: 202460 kB' 'Shmem: 8922944 kB' 'KReclaimable: 301552 kB' 'Slab: 914216 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612664 kB' 'KernelStack: 21984 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10891172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 
13631488 kB' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42934028 kB' 'MemAvailable: 44577096 kB' 'Buffers: 4156 kB' 'Cached: 11058036 kB' 'SwapCached: 20048 kB' 'Active: 6762588 kB' 'Inactive: 4911456 kB' 'Active(anon): 6308008 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595088 kB' 'Mapped: 202436 kB' 'Shmem: 8922948 kB' 'KReclaimable: 301552 kB' 'Slab: 914184 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612632 kB' 'KernelStack: 21952 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10891192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.511 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:08.512 05:27:58 
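The xtrace above is a per-key scan of a /proc/meminfo snapshot: each iteration reads one "key: value" pair, compares the key against the requested field (HugePages_Surp in this pass) and continues until it matches, then echoes the value and returns, which hugepages.sh stores as surp=0. A minimal sketch of such a lookup, reconstructed only from the commands visible in this trace (the helper name and exact structure below are illustrative assumptions, not necessarily what setup/common.sh actually contains):

    # Illustrative reconstruction of the meminfo lookup seen in the trace above.
    # Names are assumptions; the real helper lives in setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node number the per-node file would be used instead; the trace
        # shows node= empty, hence the non-existent .../node/node/meminfo check
        # and the fall back to /proc/meminfo.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one, then print its value.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done <"$mem_f"
        echo 0
    }

Called as get_meminfo_sketch HugePages_Surp against the snapshot printed above, this would output 0, matching the "echo 0" / "return 0" pair that closes the scan and the surp=0 assignment that follows.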
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42933272 kB' 'MemAvailable: 44576340 kB' 'Buffers: 4156 kB' 'Cached: 11058036 kB' 'SwapCached: 20048 kB' 'Active: 6762628 kB' 'Inactive: 4911456 kB' 'Active(anon): 6308048 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595120 kB' 'Mapped: 202436 kB' 'Shmem: 8922948 kB' 'KReclaimable: 301552 kB' 'Slab: 914184 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612632 kB' 'KernelStack: 21968 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10891212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.512 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.513 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.778 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.779 nr_hugepages=1024 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.779 resv_hugepages=0 00:03:08.779 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.779 surplus_hugepages=0 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.779 anon_hugepages=0 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42932264 kB' 'MemAvailable: 44575332 kB' 'Buffers: 4156 kB' 'Cached: 11058076 kB' 'SwapCached: 20048 kB' 'Active: 6762628 kB' 'Inactive: 4911456 kB' 'Active(anon): 6308048 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595084 kB' 'Mapped: 202436 kB' 'Shmem: 8922988 kB' 'KReclaimable: 301552 kB' 'Slab: 914184 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612632 kB' 'KernelStack: 21952 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10891236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 
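By this point the per_node_1G_alloc pass has collected anon=0, surp=0 and resv=0, and the two guards logged above ((( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages ))) assert that the 1024 pages requested by the test are fully accounted for before the HugePages_Total lookup starts. A worked check, with the values copied from the meminfo dump in the trace (a sketch for illustration, not part of the test scripts):

    # Consistency check using the figures from the snapshot above.
    nr_hugepages=1024     # HugePages_Total
    surp=0                # HugePages_Surp
    resv=0                # HugePages_Rsvd
    hugepagesize_kb=2048  # Hugepagesize
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
    # 1024 pages * 2048 kB = 2097152 kB, i.e. the "Hugetlb: 2097152 kB" line
    # in the snapshot (2 GiB backed by 2 MiB pages).
    echo "$(( nr_hugepages * hugepagesize_kb )) kB"

The snapshot also still reports HugePages_Free: 1024, so none of the pre-allocated 2 MiB pages are in use when this HugePages_Total scan begins.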
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.779 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.780 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.781 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.781 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21906456 kB' 'MemUsed: 10732684 kB' 'SwapCached: 17412 kB' 'Active: 3997068 kB' 'Inactive: 3926016 kB' 'Active(anon): 3950596 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514428 kB' 'Mapped: 115916 kB' 'AnonPages: 411848 kB' 'Shmem: 6744364 kB' 'KernelStack: 12072 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195540 kB' 'Slab: 522388 kB' 'SReclaimable: 195540 kB' 'SUnreclaim: 326848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.782 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
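The long runs of "[[ <field> == ... ]]" followed by "continue" above are bash xtrace from the get_meminfo helper in setup/common.sh: it walks a meminfo-style file one "key: value" pair at a time until it reaches the requested key, then echoes that key's value (the "echo 0" / "return 0" entries). A minimal sketch of that pattern follows; the function name and option handling are simplified rather than copied from common.sh, and per-node files are assumed to prefix every field with "Node <id> " as /sys/devices/system/node/node*/meminfo does.

#!/usr/bin/env bash
# Simplified sketch of the scan seen in the trace: read a meminfo-style file
# field by field and print the value of the requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val rest
    while IFS= read -r line; do
        line=${line#"Node $node "}          # per-node files prefix each field with "Node <id> "
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then       # the [[ ... ]] / continue lines in the log
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total        # system-wide, from /proc/meminfo
get_meminfo_sketch HugePages_Surp 0       # NUMA node 0, from node0/meminfo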
00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
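The get_nodes entries earlier in this test (hugepages.sh@27-33) enumerate the NUMA nodes under /sys/devices/system/node and record a per-node hugepage count, which on this machine comes out as 512 pages on node0 and 512 on node1 (no_nodes=2). One plausible way to gather the same numbers directly from sysfs is sketched below; the variable names mirror the trace, but the hugepages-2048kB path is an assumption based on the 2048 kB Hugepagesize reported above, not necessarily how hugepages.sh itself fills nodes_sys.

#!/usr/bin/env bash
# Sketch: enumerate NUMA nodes and read each node's 2 MB hugepage count.
shopt -s extglob nullglob                 # +([0-9]) globbing, as used in the traced scripts
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                     # ".../node1" -> "1"
    nodes_sys[id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")   # assumed source of the 512s
done
echo "no_nodes=${#nodes_sys[@]}"
for id in "${!nodes_sys[@]}"; do
    echo "node$id=${nodes_sys[id]}"
done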
00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.783 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21026072 kB' 'MemUsed: 6630008 kB' 'SwapCached: 2636 kB' 'Active: 2765112 kB' 'Inactive: 985440 kB' 'Active(anon): 2357004 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 978484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3567896 kB' 'Mapped: 86520 kB' 'AnonPages: 182764 kB' 'Shmem: 2178668 kB' 'KernelStack: 9848 kB' 'PageTables: 3372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106012 kB' 'Slab: 391796 kB' 'SReclaimable: 106012 kB' 'SUnreclaim: 285784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
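With the system-wide figures confirmed (1024 hugepages, 0 reserved, 0 surplus), hugepages.sh@115-130 folds the reserved count and each node's HugePages_Surp into the expected per-node figure, then compares it with what the kernel reports; the run above passes because both nodes land on the expected 512. The arithmetic, condensed into a standalone sketch using the values visible in this run:

#!/usr/bin/env bash
# Condensed per-node verification using the numbers from this run.
nr_hugepages=1024 resv=0 surp=0
nodes_test=([0]=512 [1]=512)              # expected split of the 1024 pages
nodes_sys=([0]=512 [1]=512)               # what get_nodes read back from the kernel

(( 1024 == nr_hugepages + surp + resv )) || { echo 'total mismatch'; exit 1; }

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))        # hugepages.sh@116
    (( nodes_test[node] += 0 ))           # hugepages.sh@117: per-node HugePages_Surp, 0 here
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done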
00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
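The backslash-riddled targets such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption in the log: when the right-hand side of == inside [[ ]] is a quoted expansion, bash's xtrace escapes every character so the printed command still reads as a literal (non-glob) comparison. A short demonstration of the effect is below; the field names are simply the ones this trace happens to compare.

#!/usr/bin/env bash
set -x                                    # xtrace, as enabled throughout the autotest run
get=HugePages_Surp
var=MemTotal
# The next line is traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ $var == "$get" ]] || echo "no match, keep scanning"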
00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.784 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.784 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:08.785 node0=512 expecting 512 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:08.785 node1=512 expecting 512 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:08.785 00:03:08.785 real 0m3.472s 00:03:08.785 user 0m1.309s 00:03:08.785 sys 0m2.203s 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:08.785 05:27:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:08.785 ************************************ 00:03:08.785 END TEST per_node_1G_alloc 00:03:08.785 ************************************ 00:03:08.785 05:27:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:08.785 05:27:58 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:08.785 05:27:58 
setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:08.785 05:27:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:08.785 ************************************ 00:03:08.785 START TEST even_2G_alloc 00:03:08.785 ************************************ 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.785 05:27:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:12.076 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
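For reference, the even_2G_alloc setup traced above reduces to simple arithmetic: a 2097152 kB request at the default 2048 kB hugepage size gives 1024 pages, split evenly across the two NUMA nodes, which is why the trace assigns 512 to each entry of nodes_test. A minimal standalone sketch of that calculation follows; the variable names are illustrative and not taken from setup/hugepages.sh, and the echo only mirrors the 'node0=512 expecting 512' style of output seen for the previous test.

    #!/usr/bin/env bash
    # Illustrative sketch: split a requested hugepage budget evenly across NUMA nodes.
    size_kb=2097152          # requested allocation in kB (2 GiB), as in the trace
    hugepage_kb=2048         # default hugepage size reported by the kernel, in kB
    nr_nodes=2               # NUMA nodes present on this test machine

    nr_hugepages=$(( size_kb / hugepage_kb ))   # 2097152 / 2048 = 1024 pages
    per_node=$(( nr_hugepages / nr_nodes ))     # 1024 / 2 = 512 pages per node

    for (( node = 0; node < nr_nodes; node++ )); do
        echo "node${node}=${per_node} expecting ${per_node}"
    done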
00:03:12.076 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:12.076 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42952528 kB' 'MemAvailable: 44595596 kB' 'Buffers: 4156 kB' 'Cached: 11058208 kB' 'SwapCached: 20048 kB' 'Active: 6763968 kB' 'Inactive: 4911456 kB' 'Active(anon): 6309388 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596280 kB' 'Mapped: 202468 kB' 'Shmem: 8923120 kB' 'KReclaimable: 301552 kB' 'Slab: 913880 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612328 kB' 'KernelStack: 21984 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10892172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216408 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.342 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
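The long runs of 'continue' entries above are the xtrace of a field-matching pass over /proc/meminfo: each line is split on ': ', the key is compared against the requested field (AnonHugePages here), non-matching keys fall through to continue, and the first match is echoed back, which is how the loop ends in 'echo 0' and anon=0. A minimal self-contained version of that pattern, sketched here rather than copied from setup/common.sh:

    #!/usr/bin/env bash
    # Illustrative sketch: fetch one field from a meminfo-style file, mirroring the traced loop.
    get_meminfo_field() {
        local want=$1 file=${2:-/proc/meminfo}
        local var val unit
        while IFS=': ' read -r var val unit; do
            [[ $var == "$want" ]] || continue   # skip every non-matching key, as in the trace
            echo "$val"                         # value in kB, or a bare count for HugePages_* fields
            return 0
        done < "$file"
        return 1
    }

    get_meminfo_field AnonHugePages   # prints 0 on the machine traced above, hence anon=0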
00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42951736 kB' 'MemAvailable: 44594804 kB' 'Buffers: 4156 kB' 'Cached: 11058212 kB' 'SwapCached: 20048 kB' 'Active: 6763676 kB' 'Inactive: 4911456 kB' 'Active(anon): 6309096 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596040 kB' 'Mapped: 202448 kB' 'Shmem: 8923124 kB' 'KReclaimable: 301552 kB' 'Slab: 913908 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612356 kB' 'KernelStack: 21968 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10892192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216376 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.343 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.344 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42951736 kB' 'MemAvailable: 44594804 kB' 'Buffers: 4156 kB' 'Cached: 11058228 kB' 'SwapCached: 20048 kB' 'Active: 6763524 kB' 'Inactive: 4911456 kB' 'Active(anon): 6308944 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595848 kB' 'Mapped: 202448 kB' 'Shmem: 8923140 kB' 'KReclaimable: 301552 kB' 'Slab: 913908 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612356 kB' 'KernelStack: 21952 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10892212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216376 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.345 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.346 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.346 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.346 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.346 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.346 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.346 
05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[per-key xtrace condensed: the get_meminfo loop tests every remaining /proc/meminfo field from Mlocked through CmaFree against HugePages_Rsvd and takes the continue branch on each; the scan resumes below with the last few key tests and the HugePages_Rsvd match (echo 0 / return 0).]
00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.347 nr_hugepages=1024 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.347 resv_hugepages=0 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.347 surplus_hugepages=0 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.347 anon_hugepages=0 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
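The xtrace above and below comes from setup/common.sh's get_meminfo helper: with IFS set to ': ', it reads the selected meminfo file one key/value line at a time, hits continue for every key that is not the one requested, and echoes the value of the first match. A minimal stand-alone sketch of that pattern follows; the function name meminfo_value is illustrative only and is not part of the project's scripts.

meminfo_value() {
    # Walk /proc/meminfo one "Key:   value [kB]" line at a time.
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # each skipped key shows up as one '-- # continue' xtrace entry
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Total   # 1024 in this run: 1024 pages of 2048 kB, the 2 GB even_2G_alloc target

The same lookup is repeated per NUMA node further down; for the even_2G_alloc case the log shows the 1024 pages split evenly, 512 on node0 and 512 on node1.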
00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42952412 kB' 'MemAvailable: 44595480 kB' 'Buffers: 4156 kB' 'Cached: 11058252 kB' 'SwapCached: 20048 kB' 'Active: 6763700 kB' 'Inactive: 4911456 kB' 'Active(anon): 6309120 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596032 kB' 'Mapped: 202448 kB' 'Shmem: 8923164 kB' 'KReclaimable: 301552 kB' 'Slab: 913908 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612356 kB' 'KernelStack: 21968 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10892236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216376 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.347 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.347 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[per-key xtrace condensed: the same get_meminfo loop now scans for HugePages_Total and takes the continue branch for every field from SwapCached through Unaccepted; the HugePages_Total match and its echo 1024 / return 0 follow.]
00:03:12.349
05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21924724 kB' 'MemUsed: 10714416 kB' 'SwapCached: 17412 kB' 'Active: 3997112 kB' 'Inactive: 3926016 kB' 'Active(anon): 3950640 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514500 kB' 'Mapped: 115928 kB' 'AnonPages: 411876 kB' 'Shmem: 6744436 kB' 'KernelStack: 12120 kB' 
'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195540 kB' 'Slab: 522156 kB' 'SReclaimable: 195540 kB' 'SUnreclaim: 326616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.349 05:28:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': '
[per-key xtrace condensed: get_meminfo scans node0's meminfo fields from Active(file) through Unaccepted for HugePages_Surp, taking the continue branch on each; the trace resumes below with the remaining key tests and the HugePages_Surp match (echo 0 / return 0).]
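For the per-node checks, the traced helper points mem_f at /sys/devices/system/node/nodeN/meminfo and, as the mapfile and extglob lines in the trace show, strips the leading "Node <N> " prefix with "${mem[@]#Node +([0-9]) }" before running the same key scan. A rough stand-alone equivalent, with node_meminfo_value as an illustrative name rather than anything from setup/common.sh:

node_meminfo_value() {
    # Lines in /sys/devices/system/node/node<N>/meminfo look like
    # "Node 0 HugePages_Surp:     0"; drop the "Node <N> " prefix and
    # then reuse the same IFS=': ' key/value scan as for /proc/meminfo.
    local node=$1 want=$2 line var val _
    while read -r line; do
        line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

node_meminfo_value 0 HugePages_Surp   # 0 here, matching the 'echo 0' and 'return 0' the trace prints below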
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.350 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21029128 kB' 'MemUsed: 6626952 kB' 'SwapCached: 2636 kB' 'Active: 2766592 kB' 'Inactive: 985440 kB' 'Active(anon): 2358484 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 978484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3567976 kB' 'Mapped: 86520 kB' 'AnonPages: 184152 kB' 'Shmem: 2178748 kB' 'KernelStack: 9848 kB' 
'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106012 kB' 'Slab: 391752 kB' 'SReclaimable: 106012 kB' 'SUnreclaim: 285740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.351 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:12.352 node0=512 expecting 512 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:12.352 node1=512 expecting 512 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:12.352 00:03:12.352 real 0m3.605s 00:03:12.352 user 0m1.367s 00:03:12.352 sys 0m2.294s 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:12.352 05:28:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:12.352 ************************************ 00:03:12.352 END TEST even_2G_alloc 00:03:12.352 ************************************ 00:03:12.611 05:28:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:12.611 05:28:02 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:12.611 05:28:02 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:12.611 05:28:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.611 ************************************ 00:03:12.611 START TEST odd_alloc 00:03:12.611 
************************************ 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.611 05:28:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:15.906 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.906 
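The odd_alloc preamble traced above converts the requested HUGEMEM (2049 MB, i.e. 2098176 kB) into 1025 default-sized 2048 kB pages and spreads them across the two NUMA nodes as 513 and 512, mirroring the 512/512 split the even_2G_alloc test just verified. A minimal stand-alone sketch of that split, using a hypothetical split_hugepages_per_node helper rather than the real hugepages.sh internals:

# Sketch only: divide a total hugepage count across NUMA nodes, handing any
# remainder to the lowest-numbered node (1025 over 2 nodes -> 513 + 512).
split_hugepages_per_node() {
    local total=$1 nodes=$2
    local base=$((total / nodes)) rem=$((total % nodes)) i count
    for ((i = 0; i < nodes; i++)); do
        count=$base
        ((i < rem)) && count=$((count + 1))
        echo "node${i}=${count}"
    done
}

split_hugepages_per_node 1025 2   # odd_alloc:      node0=513, node1=512
split_hugepages_per_node 1024 2   # even_2G_alloc:  node0=512, node1=512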
0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:15.906 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.906 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42968132 kB' 'MemAvailable: 44611200 kB' 'Buffers: 4156 kB' 'Cached: 11058368 kB' 'SwapCached: 20048 kB' 'Active: 6764876 kB' 'Inactive: 4911456 kB' 'Active(anon): 6310296 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596620 kB' 'Mapped: 202392 kB' 'Shmem: 8923280 kB' 'KReclaimable: 301552 kB' 'Slab: 913664 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612112 kB' 'KernelStack: 22080 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10895152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.907 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 
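Each long run of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' above is the same get_meminfo walk: common.sh snapshots /proc/meminfo (or a node's /sys/devices/system/node/node<N>/meminfo), drops the 'Node <N> ' prefix, splits each line on ': ' with read -r var val _, and echoes the value once the requested field matches. A condensed, self-contained sketch of that lookup (meminfo_value is a hypothetical name, not the common.sh function):

# Sketch only: return the value column for one meminfo field, the way the
# traced get_meminfo loop does -- split on ': ', skip fields until the match.
meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo} line var val _
    while IFS= read -r line; do
        line=${line#Node * }                 # per-node files prefix lines with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

meminfo_value HugePages_Total                                        # e.g. 1025
meminfo_value HugePages_Free /sys/devices/system/node/node0/meminfo  # per-node count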
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 
05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42968636 kB' 'MemAvailable: 44611704 kB' 'Buffers: 4156 kB' 'Cached: 11058372 kB' 'SwapCached: 20048 kB' 'Active: 6764544 kB' 'Inactive: 4911456 kB' 'Active(anon): 6309964 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596752 kB' 'Mapped: 202444 kB' 'Shmem: 8923284 kB' 'KReclaimable: 301552 kB' 'Slab: 913644 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612092 kB' 'KernelStack: 22096 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10895172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.908 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.909 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 
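Once the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups above complete, verify_nr_hugepages finishes the same way the even_2G_alloc run did: each node's observed hugepage count is echoed next to the expected value ('node0=512 expecting 512') and compared. A hedged, stand-alone version of that final per-node check, reusing the meminfo_value sketch above and an illustrative expected-count array rather than the hugepages.sh bookkeeping:

# Sketch only: assert that every node ended up with the expected hugepage count.
expected=(513 512)   # odd_alloc split computed earlier in the trace

verify_node_split() {
    local node got rc=0
    for node in "${!expected[@]}"; do
        got=$(meminfo_value HugePages_Total "/sys/devices/system/node/node${node}/meminfo")
        echo "node${node}=${got} expecting ${expected[node]}"
        [[ $got == "${expected[node]}" ]] || rc=1
    done
    return $rc
}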
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42967708 kB' 'MemAvailable: 44610776 kB' 'Buffers: 4156 kB' 'Cached: 11058388 kB' 'SwapCached: 20048 kB' 'Active: 6764964 kB' 'Inactive: 4911456 kB' 'Active(anon): 6310384 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597084 kB' 'Mapped: 202444 kB' 'Shmem: 8923300 kB' 'KReclaimable: 301552 kB' 'Slab: 913700 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612148 kB' 'KernelStack: 22128 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10893704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.910 05:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.910 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 
05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
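[annotation] The long run of "[[ <key> == HugePages_Rsvd ]]" / "continue" entries above and below is setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo (or a per-node meminfo file) until it reaches the requested key, then echoing that key's value -- 0 for HugePages_Rsvd in this run. The following is a minimal sketch of that lookup, reconstructed from this trace rather than copied from the SPDK tree, so details of the real helper may differ:

#!/usr/bin/env bash
# Sketch of get_meminfo as it appears in this trace (illustrative, not verbatim).
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
	local get=$1 node=$2   # key to look up, optional NUMA node number
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# When a node is given and its meminfo exists, read the per-node file instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk the "Key: value unit" lines, splitting on ':' and spaces,
	# skipping every key except the requested one.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Example calls (values depend on the machine running the test):
#   get_meminfo HugePages_Total      -> 1025 in this log
#   get_meminfo HugePages_Surp 0     -> surplus hugepages on NUMA node 0

Because every non-matching key produces a "continue" plus the IFS/read bookkeeping in the xtrace output, a single lookup expands into the long blocks of near-identical lines seen throughout this section.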
00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.911 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:15.912 nr_hugepages=1025 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.912 resv_hugepages=0 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.912 surplus_hugepages=0 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.912 anon_hugepages=0 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42967776 kB' 'MemAvailable: 44610844 kB' 'Buffers: 4156 kB' 'Cached: 11058408 kB' 'SwapCached: 20048 kB' 'Active: 6764984 kB' 'Inactive: 4911456 kB' 'Active(anon): 6310404 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597108 kB' 'Mapped: 202444 kB' 'Shmem: 8923320 kB' 'KReclaimable: 301552 kB' 'Slab: 913700 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612148 kB' 'KernelStack: 22112 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10895212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.912 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.913 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21918068 kB' 'MemUsed: 10721072 kB' 'SwapCached: 17412 kB' 'Active: 3997084 kB' 'Inactive: 3926016 kB' 'Active(anon): 3950612 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514592 kB' 'Mapped: 115924 kB' 'AnonPages: 411716 kB' 'Shmem: 6744528 kB' 'KernelStack: 12056 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195540 kB' 'Slab: 522136 kB' 'SReclaimable: 195540 kB' 'SUnreclaim: 326596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.914 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.915 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 21049664 kB' 'MemUsed: 6606416 kB' 'SwapCached: 2636 kB' 'Active: 2768020 kB' 'Inactive: 985440 kB' 'Active(anon): 2359912 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 978484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3568020 kB' 'Mapped: 87024 kB' 'AnonPages: 185544 kB' 'Shmem: 2178792 kB' 'KernelStack: 10056 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106012 kB' 'Slab: 391564 kB' 'SReclaimable: 106012 kB' 'SUnreclaim: 285552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.916 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:15.917 node0=512 expecting 513 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:15.917 node1=513 expecting 512 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:15.917 00:03:15.917 real 0m3.209s 00:03:15.917 user 0m1.132s 00:03:15.917 sys 0m2.075s 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:15.917 05:28:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:15.917 ************************************ 00:03:15.917 END TEST odd_alloc 00:03:15.917 ************************************ 00:03:15.917 05:28:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:15.917 05:28:05 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:15.917 05:28:05 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:15.917 05:28:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.917 ************************************ 00:03:15.917 START TEST custom_alloc 00:03:15.917 ************************************ 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
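The odd_alloc trace above is dominated by setup/common.sh's get_meminfo loop, which scans /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo when a node is given) key by key until it reaches the requested field; each "continue" line in the log is one skipped key. Below is a minimal bash sketch of that lookup, reconstructed from the trace rather than copied verbatim from setup/common.sh, so the function body is an assumption even though the names mirror the log.

#!/usr/bin/env bash
# Minimal reconstruction of the meminfo lookup traced above (not the verbatim helper).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # Per-node lookups read the sysfs copy, where every line carries a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#"Node $node "}             # drop the "Node N " prefix on per-node files
        IFS=': ' read -r var val _ <<< "$line" # split "Key:   value [kB]" into var/val
        [[ $var == "$get" ]] || continue       # the long runs of "continue" in the trace
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0
}

# Example: the surplus huge pages on NUMA node 1 that odd_alloc just checked.
get_meminfo_sketch HugePages_Surp 1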
00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.917 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.918 05:28:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:19.214 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:03:19.214 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.214 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41938076 kB' 'MemAvailable: 43581144 kB' 'Buffers: 4156 kB' 'Cached: 11058536 kB' 'SwapCached: 20048 kB' 'Active: 6766416 kB' 'Inactive: 4911456 kB' 'Active(anon): 6311836 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598372 kB' 'Mapped: 202480 kB' 'Shmem: 8923448 kB' 'KReclaimable: 301552 kB' 'Slab: 913496 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 611944 kB' 'KernelStack: 22192 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10896148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216648 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.214 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.215 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
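At this point verify_nr_hugepages has confirmed that transparent huge pages are not set to [never], read AnonHugePages (anon=0), and is about to read HugePages_Surp so reserved and surplus pages can be folded into the per-node expectations. The following is a condensed, assumed sketch of that accounting, reusing the hypothetical get_meminfo_sketch helper shown earlier; the real logic lives in setup/hugepages.sh and is only summarized here.

#!/usr/bin/env bash
# Condensed view of the verification the custom_alloc trace is running here.
verify_sketch() {
    local node anon=0 surp resv
    # Expected per-node counts, as requested via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.
    local -A nodes_test=([0]=512 [1]=1024)

    # AnonHugePages only matters while transparent huge pages are not set to [never].
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "system-wide: anon=${anon} surp=${surp} resv=${resv}"

    # Fold reserved pages and each node's own surplus into the expectation,
    # which is the accounting the per-node HugePages_Surp lookups above feed.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
        echo "node${node}: expecting ${nodes_test[node]} huge pages"
    done
}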
00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41938972 kB' 'MemAvailable: 43582040 kB' 'Buffers: 4156 kB' 'Cached: 11058540 kB' 'SwapCached: 20048 kB' 'Active: 6765716 kB' 'Inactive: 4911456 kB' 'Active(anon): 6311136 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597636 kB' 'Mapped: 202484 kB' 'Shmem: 8923452 kB' 'KReclaimable: 301552 kB' 'Slab: 913552 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612000 kB' 'KernelStack: 22224 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10895920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.216 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.217 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.217 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.217 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.217 05:28:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue sequence for every remaining /proc/meminfo field from SwapCached through Unaccepted without finding a match]
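For orientation while reading the trace that resumes below: the helper being exercised here simply scans a meminfo-style file for one key and prints its value, falling back to a per-node file under /sys/devices/system/node when a node is requested. A minimal sketch of that idea, assuming bash with extglob; the function name and structure are illustrative only, not the actual setup/common.sh source:

    shopt -s extglob
    get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local file=/proc/meminfo line var val _
      # per-node lookups read that node's own meminfo instead of the global one
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$file"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <n> " prefix
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
      done
      echo 0
    }
    # e.g. get_meminfo_sketch HugePages_Surp     -> 0 on this machine
    #      get_meminfo_sketch HugePages_Total 0  -> per-node count for NUMA node 0

The trace resumes below with the tail of the HugePages_Surp scan (which echoes 0), followed by identical scans for HugePages_Rsvd and HugePages_Total.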
00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41938660 kB' 'MemAvailable: 43581728 kB' 'Buffers: 4156 kB' 'Cached: 11058556 kB' 'SwapCached: 20048 kB' 'Active: 6765700 kB' 'Inactive: 4911456 kB' 'Active(anon): 6311120 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597688 kB' 'Mapped: 202476 kB' 'Shmem: 
8923468 kB' 'KReclaimable: 301552 kB' 'Slab: 913552 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612000 kB' 'KernelStack: 22144 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10896188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216664 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.219 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.219 
05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same per-field scan repeats for the HugePages_Rsvd lookup, stepping through every field from Inactive through HugePages_Free with continue on each]
00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:19.222 nr_hugepages=1536 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.222 resv_hugepages=0 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.222 surplus_hugepages=0 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.222 anon_hugepages=0 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41937868 kB' 'MemAvailable: 43580936 kB' 'Buffers: 4156 kB' 'Cached: 11058576 kB' 'SwapCached: 20048 kB' 'Active: 6766060 kB' 'Inactive: 4911456 kB' 'Active(anon): 6311480 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597972 kB' 'Mapped: 202476 kB' 'Shmem: 8923488 kB' 'KReclaimable: 301552 kB' 'Slab: 913552 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612000 kB' 'KernelStack: 22240 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10895960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216568 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.222 05:28:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[xtrace condensed: the per-field scan repeats once more for the HugePages_Total lookup, stepping through every remaining field from Inactive(anon) through ShmemPmdMapped with continue on each]
00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.223 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- 
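The get_nodes calls traced above record the per-node hugepage counts (512 pages on node0, 1024 on node1) before the totals are verified. A minimal stand-alone sketch of the same idea, enumerating NUMA nodes and reading each node's 2 MiB hugepage count from sysfs; the hugepages-2048kB path assumes 2 MiB default pages and is illustrative, not copied from this log:

  #!/usr/bin/env bash
  # Sketch: collect per-NUMA-node 2 MiB hugepage counts, as get_nodes does above.
  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}                                   # "node1" -> "1"
      # Assumed sysfs layout for 2048 kB pages; adjust for other page sizes.
      f=$node/hugepages/hugepages-2048kB/nr_hugepages
      [[ -r $f ]] && nodes_sys[$id]=$(<"$f")
  done
  for id in "${!nodes_sys[@]}"; do
      echo "node$id=${nodes_sys[$id]}"
  done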
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21911408 kB' 'MemUsed: 10727732 kB' 'SwapCached: 17412 kB' 'Active: 3996884 kB' 'Inactive: 3926016 kB' 'Active(anon): 3950412 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514688 kB' 'Mapped: 115936 kB' 'AnonPages: 411356 kB' 'Shmem: 6744624 kB' 'KernelStack: 12136 kB' 'PageTables: 5112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195540 kB' 'Slab: 521972 kB' 'SReclaimable: 195540 kB' 'SUnreclaim: 326432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.224 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
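The trace above is get_meminfo scanning node0's meminfo entry by entry until it reaches HugePages_Surp and returns 0; the same walk is repeated for node1 below. A condensed re-implementation of that lookup, simplified from the mechanism traced here rather than taken verbatim from setup/common.sh:

  #!/usr/bin/env bash
  # Sketch: fetch one field (e.g. HugePages_Surp) from /proc/meminfo or from a
  # node's /sys/devices/system/node/nodeN/meminfo.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      # Prefer the per-node view when a node id is given and the file exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          line=${line#Node +([0-9]) }            # per-node files prefix each row with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # Example: surplus 2 MiB hugepages currently on node 0
  get_meminfo HugePages_Surp 0

The per-node files prefix every row with "Node N ", which is why the extglob strip happens before the key comparison.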
00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20027212 kB' 'MemUsed: 7628868 kB' 'SwapCached: 2636 kB' 'Active: 2768488 kB' 'Inactive: 985440 kB' 'Active(anon): 2360380 kB' 'Inactive(anon): 6956 kB' 'Active(file): 408108 kB' 'Inactive(file): 978484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3568136 kB' 'Mapped: 86520 kB' 'AnonPages: 185892 kB' 'Shmem: 2178908 kB' 'KernelStack: 9896 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106012 kB' 'Slab: 391508 kB' 'SReclaimable: 106012 kB' 'SUnreclaim: 285496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.225 05:28:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.226 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:19.227 node0=512 expecting 512 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:19.227 node1=1024 expecting 1024 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:19.227 00:03:19.227 real 0m3.383s 00:03:19.227 user 0m1.260s 00:03:19.227 sys 0m2.143s 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:19.227 05:28:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:19.227 ************************************ 00:03:19.227 END TEST custom_alloc 00:03:19.227 ************************************ 00:03:19.227 05:28:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:19.227 05:28:09 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:19.227 05:28:09 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:19.227 05:28:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.227 ************************************ 00:03:19.227 START TEST no_shrink_alloc 00:03:19.227 ************************************ 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- 
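The no_shrink_alloc test starting above requests 2097152 kB of hugepages restricted to node 0 and arrives at nr_hugepages=1024. A sketch of that arithmetic, under the assumption that both the requested size and Hugepagesize are in kB as the values later in this log suggest; the variable names below are illustrative, not the setup/hugepages.sh source:

  #!/usr/bin/env bash
  # Sketch: convert a requested size in kB into a hugepage count and pin it to
  # the requested NUMA nodes (the test above passes only node 0).
  size_kb=2097152                      # value used by the no_shrink_alloc test above
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
  declare -a nodes_test
  for node in 0; do                    # user-supplied node list
      nodes_test[$node]=$nr_hugepages
  done
  echo "nr_hugepages=$nr_hugepages (node0=${nodes_test[0]})"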
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.227 05:28:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:22.573 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.573 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.573 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43014940 kB' 'MemAvailable: 44658008 kB' 'Buffers: 4156 kB' 'Cached: 11058696 kB' 'SwapCached: 20048 kB' 'Active: 6767772 kB' 'Inactive: 4911456 kB' 'Active(anon): 6313192 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599504 kB' 'Mapped: 202464 kB' 'Shmem: 8923608 kB' 'KReclaimable: 301552 kB' 'Slab: 913688 kB' 'SReclaimable: 301552 kB' 'SUnreclaim: 612136 kB' 'KernelStack: 21952 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.838 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:22.839 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop repeats: every /proc/meminfo key from Mlocked through HardwareCorrupted is checked against AnonHugePages and skipped with continue]
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [local get=HugePages_Surp, node=, mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43015180 kB' 'MemAvailable: 44658216 kB' 'Buffers: 4156 kB' 'Cached: 11058696 kB' 'SwapCached: 20048 kB' 'Active: 6767472 kB' 'Inactive: 4911456 kB' 'Active(anon): 6312892 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599272 kB' 'Mapped: 202464 kB' 'Shmem: 8923608 kB' 'KReclaimable: 301488 kB' 'Slab: 913640 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 612152 kB' 'KernelStack: 21952 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB'
00:03:22.840 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop repeats: every key from MemTotal through HugePages_Rsvd is checked against HugePages_Surp and skipped with continue]
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
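The lookup the trace keeps repeating can be summarized as a small shell function. This is a hedged sketch of the pattern visible in the setup/common.sh lines above (read /proc/meminfo, or a per-node meminfo file, strip the "Node <n> " prefix, split each line on ': ', and print the value of the requested key); the function name get_meminfo_value and its argument handling are illustrative, not the exact upstream implementation.

#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup traced above. Names are illustrative.
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a NUMA node is given and a per-node meminfo exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node <n> " prefix; strip it so the keys match.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching key is skipped, which is what produces the long
        # runs of "continue" in the xtrace output above.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}

Run against the snapshot printed above, get_meminfo_value AnonHugePages, get_meminfo_value HugePages_Surp, and get_meminfo_value HugePages_Rsvd would each print 0, matching the anon=0, surp=0, and resv=0 assignments in the trace.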
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43015112 kB' 'MemAvailable: 44658148 kB' 'Buffers: 4156 kB' 'Cached: 11058716 kB' 'SwapCached: 20048 kB' 'Active: 6767488 kB' 'Inactive: 4911456 kB' 'Active(anon): 6312908 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599272 kB' 'Mapped: 202464 kB' 'Shmem: 8923628 kB' 'KReclaimable: 301488 kB' 'Slab: 913640 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 612152 kB' 'KernelStack: 21952 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB'
00:03:22.842 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop repeats: every key from MemTotal through HugePages_Free is checked against HugePages_Rsvd and skipped with continue]
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
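At this point anon, surp, and resv have all come back as 0. The setup/hugepages.sh steps traced just above and below (@97 through @110) amount to a consistency check on the hugepage pool; the following is a hedged sketch of that accounting, with illustrative names and ordering, reusing the get_meminfo_value helper sketched earlier rather than the exact upstream code.

#!/usr/bin/env bash
# Hedged sketch of the hugepage accounting around setup/hugepages.sh@97-@110.
# Assumes the get_meminfo_value helper from the earlier sketch is sourced.
check_hugepage_accounting() {
    local expected=$1   # 1024 in this run
    local anon surp resv nr_hugepages

    anon=$(get_meminfo_value AnonHugePages)            # anon=0 in the trace
    surp=$(get_meminfo_value HugePages_Surp)           # surp=0
    resv=$(get_meminfo_value HugePages_Rsvd)           # resv=0
    nr_hugepages=$(get_meminfo_value HugePages_Total)  # 1024

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Same shape as the checks traced at @107 and @109: the expected count must
    # cover the pool plus any surplus/reserved pages, and must equal the pool
    # itself. Both hold here since surp and resv are 0 and the pool is 1024.
    (( expected == nr_hugepages + surp + resv )) || return 1
    (( expected == nr_hugepages ))
}

# Example: check_hugepage_accounting 1024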
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.844 nr_hugepages=1024
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:22.844 resv_hugepages=0
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.844 surplus_hugepages=0
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.844 anon_hugepages=0
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # [local get=HugePages_Total, node=, mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")]
00:03:22.844 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43039196 kB' 'MemAvailable: 44682232 kB' 'Buffers: 4156 kB' 'Cached: 11058756 kB' 'SwapCached: 20048 kB' 'Active: 6767164 kB' 'Inactive: 4911456 kB' 'Active(anon): 6312584 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598864 kB' 'Mapped: 202464 kB' 'Shmem: 8923668 kB' 'KReclaimable: 301488 kB' 'Slab: 913640 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 612152 kB' 'KernelStack: 21936 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB'
00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop repeats: keys from MemTotal through Slab are checked against HugePages_Total and skipped with continue; the scan continues below]
00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845
05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.845 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
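The trace above is setup/common.sh's get_meminfo resolving HugePages_Total from /proc/meminfo (1024 pages in this run): each "Key: value" line is read with IFS=': ' and skipped with continue until the requested key matches, at which point the value is echoed and the helper returns. Reconstructed from that trace, a minimal standalone sketch follows; it is not the verbatim SPDK source, and the shebang and shopt lines are additions so it runs on its own.

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern below

# get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from the
# per-node meminfo file when NODE is given and that file exists.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _

    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it so the
    # remaining "Key: value" format matches /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

get_meminfo HugePages_Total    # whole-system count; 1024 in this log
get_meminfo HugePages_Surp 0   # node0 count; 0 in this log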
nodes_sys[${node##*node}]=1024 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20893436 kB' 'MemUsed: 11745704 kB' 'SwapCached: 17412 kB' 'Active: 3998996 kB' 'Inactive: 3926016 kB' 'Active(anon): 3952524 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514696 kB' 'Mapped: 115944 kB' 'AnonPages: 413448 kB' 'Shmem: 6744632 kB' 'KernelStack: 12040 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195476 kB' 'Slab: 522068 kB' 'SReclaimable: 195476 kB' 'SUnreclaim: 326592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.846 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.847 node0=1024 expecting 1024 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.847 05:28:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:26.143 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.143 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.143 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- 
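The block above is hugepages.sh verifying the reservation per NUMA node: get_nodes fills nodes_sys from /sys/devices/system/node/node*, the per-node HugePages_Surp lookup comes back as 0, and the result is printed as "node0=1024 expecting 1024". The test then re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, which logs "Requested 512 hugepages but 1024 already allocated on node0" and keeps the existing pages. A small sketch of that per-node accounting follows; the nr_hugepages sysfs path is my assumed source of the counts, since the trace only shows the resulting assignments (1024 on node0, 0 on node1).

#!/usr/bin/env bash
shopt -s extglob nullglob
declare -A nodes_sys
declare -A expecting=([0]=1024 [1]=0)   # this run keeps all 1024 pages on node0

# Walk /sys/devices/system/node/nodeN as the trace does; the node id is the
# numeric suffix of the directory name.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

for id in "${!nodes_sys[@]}"; do
    echo "node$id=${nodes_sys[$id]} expecting ${expecting[$id]:-0}"
done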
setup/hugepages.sh@90 -- # local sorted_t 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43071704 kB' 'MemAvailable: 44714740 kB' 'Buffers: 4156 kB' 'Cached: 11058840 kB' 'SwapCached: 20048 kB' 'Active: 6768576 kB' 'Inactive: 4911456 kB' 'Active(anon): 6313996 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600168 kB' 'Mapped: 202524 kB' 'Shmem: 8923752 kB' 'KReclaimable: 301488 kB' 'Slab: 912952 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 611464 kB' 'KernelStack: 21968 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216472 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.143 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.144 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43071416 kB' 'MemAvailable: 44714452 kB' 'Buffers: 4156 kB' 'Cached: 11058844 kB' 'SwapCached: 20048 kB' 'Active: 6768228 kB' 'Inactive: 4911456 kB' 'Active(anon): 6313648 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599824 kB' 'Mapped: 202516 kB' 'Shmem: 8923756 kB' 'KReclaimable: 301488 kB' 'Slab: 912936 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 611448 kB' 'KernelStack: 21936 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 
05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.145 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.146 05:28:16 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43071920 kB' 'MemAvailable: 44714956 kB' 'Buffers: 4156 kB' 'Cached: 11058844 kB' 'SwapCached: 20048 kB' 'Active: 6768228 kB' 'Inactive: 4911456 kB' 'Active(anon): 6313648 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599824 kB' 'Mapped: 202516 kB' 'Shmem: 8923756 kB' 'KReclaimable: 301488 kB' 'Slab: 912936 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 611448 kB' 'KernelStack: 21936 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.147 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 
05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.148 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.149 nr_hugepages=1024 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.149 resv_hugepages=0 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.149 surplus_hugepages=0 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.149 anon_hugepages=0 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43071252 kB' 'MemAvailable: 44714288 kB' 'Buffers: 4156 kB' 'Cached: 11058848 kB' 'SwapCached: 20048 kB' 'Active: 6768404 kB' 'Inactive: 4911456 kB' 'Active(anon): 6313824 kB' 'Inactive(anon): 3226792 kB' 'Active(file): 454580 kB' 'Inactive(file): 1684664 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8278268 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600024 kB' 'Mapped: 202516 kB' 'Shmem: 8923760 kB' 'KReclaimable: 301488 kB' 'Slab: 912936 kB' 'SReclaimable: 301488 kB' 'SUnreclaim: 611448 kB' 'KernelStack: 21952 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10894928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 87360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3317108 kB' 'DirectMap2M: 51943424 kB' 'DirectMap1G: 13631488 kB' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.149 05:28:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.149 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue trace repeats for every remaining meminfo field (Active, Inactive, ..., Unaccepted) until HugePages_Total is reached ...]
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.150 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.151 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20909856 kB' 'MemUsed: 11729284 kB' 'SwapCached: 17412 kB' 'Active: 3998924 kB' 'Inactive: 3926016 kB' 'Active(anon): 3952452 kB' 'Inactive(anon): 3219836 kB' 'Active(file): 46472 kB' 'Inactive(file): 706180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7514700 kB' 'Mapped: 115996 kB' 'AnonPages: 413312 kB' 'Shmem: 6744636 kB' 'KernelStack: 12024 kB' 'PageTables: 4888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 195476 kB' 'Slab: 521712 kB' 'SReclaimable: 195476 kB' 'SUnreclaim: 326236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the same compare/continue trace repeats over node0's meminfo fields until HugePages_Surp is reached ...]
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:26.152 node0=1024 expecting 1024
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:26.152 real 0m6.958s
00:03:26.152 user 0m2.500s
00:03:26.152 sys 0m4.486s
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:26.152 05:28:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.152 ************************************
00:03:26.152 END TEST no_shrink_alloc
00:03:26.152 ************************************
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
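The block above is setup/common.sh's get_meminfo walking every meminfo line until it hits the requested field, which is why the trace ends in "echo 1024" and "echo 0". A minimal standalone sketch of the same scan (illustrative only, not the verbatim SPDK helper):

#!/usr/bin/env bash
# Sketch of the field scan traced above: look one field up in /proc/meminfo,
# or in a node's meminfo when a node number is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem var val _

    # Per-node counters live in sysfs; every line there starts with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node 0 " prefix, if any

    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # system-wide pool size
get_meminfo HugePages_Surp 0    # surplus pages on node0

With IFS=': ' the read splits a line such as "HugePages_Total:    1024" into the field name and its value, so the caller only ever sees the number.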
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
[... the hugepages-*/echo 0 pair repeats for each hugepage size on both NUMA nodes ...]
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:26.411 05:28:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:26.411 real 0m25.885s
00:03:26.411 user 0m8.898s
00:03:26.411 sys 0m15.632s
00:03:26.411 05:28:16 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:26.411 05:28:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.411 ************************************
00:03:26.411 END TEST hugepages
00:03:26.411 ************************************
00:03:26.411 05:28:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:26.411 05:28:16 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:26.411 05:28:16 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:26.411 05:28:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:26.411 ************************************
00:03:26.411 START TEST driver
00:03:26.411 ************************************
00:03:26.411 05:28:16 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:03:26.411 * Looking for test storage...
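The clear_hp pass above writes a zero into every per-node hugepage pool before the next test group starts. The trace only shows the bare "echo 0" calls; the nr_hugepages target in this sketch is an assumption based on the standard sysfs layout:

#!/usr/bin/env bash
# Hedged sketch of the clear_hp pass: zero out every hugepage pool on every node.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        [[ -e $hp/nr_hugepages ]] || continue
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null   # assumed target file
    done
done
export CLEAR_HUGE=yes    # the suite exports this flag once the pools are cleared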
00:03:26.411 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:26.411 05:28:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:26.411 05:28:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.411 05:28:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.681 05:28:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:31.681 05:28:21 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:31.681 05:28:21 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:31.681 05:28:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:31.681 ************************************ 00:03:31.681 START TEST guess_driver 00:03:31.681 ************************************ 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:31.681 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:31.681 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:31.681 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:31.681 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:31.681 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:31.681 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:31.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:31.682 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:31.682 05:28:21 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:31.682 Looking for driver=vfio-pci
00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.682 05:28:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:34.212 05:28:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:34.212 05:28:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:34.212 05:28:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same marker/driver check repeats for every remaining device line reported by setup.sh config ...]
00:03:36.388 05:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:36.388 05:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:36.388 05:28:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:36.388 05:28:26 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:36.388 05:28:26 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:36.388 05:28:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:36.388 05:28:26 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:40.581 real 0m9.441s
00:03:40.581 user 0m2.275s
00:03:40.581 sys 0m4.628s
00:03:40.581 05:28:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:40.581 05:28:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:40.581 ************************************
00:03:40.581 END TEST guess_driver
00:03:40.581 ************************************
00:03:40.840 real 0m14.338s
00:03:40.840 user 0m3.596s
00:03:40.840 sys 0m7.447s
00:03:40.840 05:28:30 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable
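guess_driver above settles on vfio-pci because the IOMMU groups directory is populated (the run saw 176 groups) and modprobe can resolve the module. A rough sketch of that decision; the uio_pci_generic fallback is an assumption added here, since the log only exercises the vfio path:

#!/usr/bin/env bash
# Sketch of the driver probe traced above.
shopt -s nullglob

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)

    # vfio-pci is usable when IOMMU groups exist and the module resolves.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo uio_pci_generic    # assumed fallback; the trace never reaches this branch
    fi
}

driver=$(pick_driver)
echo "Looking for driver=$driver"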
00:03:40.840 05:28:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:40.840 ************************************
00:03:40.840 END TEST driver
00:03:40.840 ************************************
00:03:40.840 05:28:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:40.840 05:28:30 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:40.840 05:28:30 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:40.840 05:28:30 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:40.840 ************************************
00:03:40.840 START TEST devices
00:03:40.840 ************************************
00:03:40.840 05:28:30 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh
00:03:40.840 * Looking for test storage...
00:03:40.840 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:03:40.840 05:28:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:40.840 05:28:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:40.840 05:28:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:40.840 05:28:30 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=()
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme*
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:45.033 05:28:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]]
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:03:45.033 05:28:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:45.033 05:28:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:45.033 05:28:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py
nvme0n1 00:03:45.033 No valid GPT data, bailing 00:03:45.033 05:28:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:45.034 05:28:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:45.034 05:28:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:45.034 05:28:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:45.034 05:28:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:45.034 05:28:34 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:45.034 05:28:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:45.034 05:28:34 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:45.034 05:28:34 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:45.034 05:28:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:45.034 ************************************ 00:03:45.034 START TEST nvme_mount 00:03:45.034 ************************************ 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
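At this point the devices test has screened nvme0n1: no existing GPT signature and a reported size (1600321314816 bytes) above min_disk_size, so it becomes the test disk. The nvme_mount trace that follows partitions, formats and mounts it. A condensed sketch of that flow under illustrative paths; the real run drives it through setup/common.sh, so the helper names and the blkid-only in-use check below are simplifications:

#!/usr/bin/env bash
set -e

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
mount_point=/tmp/nvme_mount                 # stand-in for the workspace path

disk_free() {
    # An empty PTTYPE from blkid means no partition-table signature is present.
    [[ -z $(blkid -s PTTYPE -o value "/dev/$1" 2> /dev/null) ]]
}

disk_bytes() {
    # /sys/block/<dev>/size counts 512-byte sectors.
    echo $(( $(< "/sys/block/$1/size") * 512 ))
}

# Pick the first unused NVMe namespace that is large enough.
for sysdev in /sys/block/nvme*n1; do
    [[ -e $sysdev ]] || continue
    dev=${sysdev##*/}
    if disk_free "$dev" && (( $(disk_bytes "$dev") >= min_disk_size )); then
        test_disk=$dev
        break
    fi
done
[[ -n ${test_disk:-} ]]

# Wipe any stale metadata, create one 1 GiB partition, format and mount it.
sudo sgdisk "/dev/$test_disk" --zap-all
sudo sgdisk "/dev/$test_disk" --new=1:2048:2099199
sudo mkfs.ext4 -qF "/dev/${test_disk}p1"
mkdir -p "$mount_point"
sudo mount "/dev/${test_disk}p1" "$mount_point"

The sector range 1:2048:2099199 matches the sgdisk call in the trace and yields a single 1 GiB partition starting at the conventional 1 MiB offset.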
00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:45.034 05:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:45.603 Creating new GPT entries in memory. 00:03:45.603 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.603 other utilities. 00:03:45.603 05:28:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.603 05:28:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.603 05:28:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.603 05:28:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.603 05:28:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.981 Creating new GPT entries in memory. 00:03:46.981 The operation has completed successfully. 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3228273 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status
00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0
00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.981 05:28:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:49.518 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:49.518 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same allowlist check repeats for each remaining non-matching PCI device (0000:00:04.6 through 0000:80:04.0) ...]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:49.777 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:49.777 05:28:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:50.037 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:50.037 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:03:50.037 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:50.037 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:50.037 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:50.037 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:50.037 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.296 05:28:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.686 05:28:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.975 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.976 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.976 00:03:56.976 real 0m12.062s 00:03:56.976 user 0m3.336s 00:03:56.976 sys 0m6.558s 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:56.976 05:28:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:56.976 
************************************ 00:03:56.976 END TEST nvme_mount 00:03:56.976 ************************************ 00:03:56.976 05:28:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:56.976 05:28:46 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:56.976 05:28:46 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:56.976 05:28:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.976 ************************************ 00:03:56.976 START TEST dm_mount 00:03:56.976 ************************************ 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.976 05:28:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:57.914 Creating new GPT entries in memory. 00:03:57.914 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.914 other utilities. 00:03:57.914 05:28:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.914 05:28:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.914 05:28:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:57.914 05:28:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.914 05:28:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.853 Creating new GPT entries in memory. 00:03:58.853 The operation has completed successfully. 00:03:58.853 05:28:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.853 05:28:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.853 05:28:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.853 05:28:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.853 05:28:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:59.792 The operation has completed successfully. 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3232700 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:59.792 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.051 05:28:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:03.336 05:28:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:03.336 
05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.336 05:28:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:05.867 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.867 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.867 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.868 05:28:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.127 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.127 00:04:06.127 real 0m9.416s 00:04:06.127 user 0m2.213s 00:04:06.127 sys 0m4.245s 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:06.127 05:28:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:06.127 ************************************ 00:04:06.127 END TEST dm_mount 00:04:06.127 ************************************ 00:04:06.127 05:28:56 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:04:06.127 05:28:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.127 05:28:56 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.127 05:28:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.127 05:28:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.385 05:28:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.385 05:28:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.644 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:06.644 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:06.644 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.644 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.644 05:28:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:06.644 00:04:06.644 real 0m25.725s 00:04:06.644 user 0m6.917s 00:04:06.644 sys 0m13.512s 00:04:06.644 05:28:56 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:06.644 05:28:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.644 ************************************ 00:04:06.644 END TEST devices 00:04:06.644 ************************************ 00:04:06.644 00:04:06.644 real 1m28.875s 00:04:06.644 user 0m26.251s 00:04:06.644 sys 0m50.618s 00:04:06.644 05:28:56 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:06.644 05:28:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.644 ************************************ 00:04:06.644 END TEST setup.sh 00:04:06.644 ************************************ 00:04:06.644 05:28:56 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:09.933 Hugepages 00:04:09.933 node hugesize free / total 00:04:09.933 node0 1048576kB 0 / 0 00:04:09.933 node0 2048kB 2048 / 2048 00:04:09.933 node1 1048576kB 0 / 0 00:04:09.933 node1 2048kB 0 / 0 00:04:09.933 00:04:09.933 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.933 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:09.933 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:09.933 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:09.933 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:09.933 05:28:59 -- spdk/autotest.sh@130 -- # uname -s 00:04:09.933 05:28:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:09.933 05:28:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:09.933 05:28:59 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:13.225 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.225 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.485 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.485 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.485 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.485 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.863 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.122 05:29:04 -- common/autotest_common.sh@1529 -- # sleep 1 00:04:16.060 05:29:05 -- common/autotest_common.sh@1530 -- # bdfs=() 00:04:16.060 05:29:05 -- common/autotest_common.sh@1530 -- # local bdfs 00:04:16.060 05:29:05 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.060 05:29:05 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:04:16.060 05:29:05 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:16.060 05:29:05 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:16.060 05:29:05 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.060 05:29:05 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.060 05:29:05 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:16.060 05:29:06 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:16.060 05:29:06 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:16.060 05:29:06 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.352 Waiting for block devices as requested 00:04:19.352 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.352 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.352 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.352 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.611 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.611 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.611 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.611 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.870 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.870 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.870 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:20.129 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:20.129 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:20.129 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:20.448 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:20.448 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:20.448 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:20.708 05:29:10 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:04:20.708 05:29:10 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1499 -- # grep 0000:d8:00.0/nvme/nvme 00:04:20.708 05:29:10 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:20.708 05:29:10 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:20.708 05:29:10 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:20.708 05:29:10 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:20.708 05:29:10 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:20.708 05:29:10 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:04:20.708 05:29:10 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:20.708 05:29:10 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:20.709 05:29:10 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:20.709 05:29:10 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:20.709 05:29:10 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:20.709 05:29:10 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:20.709 05:29:10 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:20.709 05:29:10 -- common/autotest_common.sh@1554 -- # continue 00:04:20.709 05:29:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:20.709 05:29:10 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:20.709 05:29:10 -- common/autotest_common.sh@10 -- # set +x 00:04:20.709 05:29:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:20.709 05:29:10 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:20.709 05:29:10 -- common/autotest_common.sh@10 -- # set +x 00:04:20.709 05:29:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:24.009 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:24.009 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.009 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.917 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.917 05:29:15 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:25.917 05:29:15 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:25.917 05:29:15 -- common/autotest_common.sh@10 -- # set +x 00:04:25.917 05:29:15 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:25.917 05:29:15 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:25.917 05:29:15 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.917 05:29:15 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:25.917 05:29:15 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:25.917 05:29:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:25.917 05:29:15 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:25.917 05:29:15 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:25.917 05:29:15 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.917 05:29:15 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.917 05:29:15 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:25.917 05:29:15 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:25.917 05:29:15 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:25.917 05:29:15 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:25.917 05:29:15 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:25.917 05:29:15 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:04:25.917 05:29:15 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:25.917 05:29:15 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:04:25.917 05:29:15 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:d8:00.0 00:04:25.917 05:29:15 -- common/autotest_common.sh@1589 -- # [[ -z 0000:d8:00.0 ]] 00:04:25.917 05:29:15 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=3242222 00:04:25.917 05:29:15 -- common/autotest_common.sh@1595 -- # waitforlisten 3242222 00:04:25.917 05:29:15 -- common/autotest_common.sh@828 -- # '[' -z 3242222 ']' 00:04:25.917 05:29:15 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.917 05:29:15 -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:25.917 05:29:15 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.917 05:29:15 -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:25.917 05:29:15 -- common/autotest_common.sh@10 -- # set +x 00:04:25.917 05:29:15 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.917 [2024-05-15 05:29:15.784717] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
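The NVMe BDF discovery and device-ID check traced just above reduce to a short shell sketch along these lines; the gen_nvme.sh path, the jq filter, and the 0x0a54 device ID are taken from the trace, while the variable and array names are illustrative only:

    #!/usr/bin/env bash
    # Sketch of the traced flow: list NVMe BDFs via gen_nvme.sh, then keep
    # only controllers whose PCI device ID matches 0x0a54.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

    # gen_nvme.sh emits a bdev_nvme_attach_controller config; jq pulls the PCI addresses.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    matched=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && matched+=("$bdf")
    done

    printf '%s\n' "${matched[@]}"   # e.g. 0000:d8:00.0 on this node
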
00:04:25.917 [2024-05-15 05:29:15.784807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242222 ] 00:04:25.917 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.917 [2024-05-15 05:29:15.854667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.917 [2024-05-15 05:29:15.932417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.856 05:29:16 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:26.856 05:29:16 -- common/autotest_common.sh@861 -- # return 0 00:04:26.856 05:29:16 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:04:26.856 05:29:16 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:04:26.856 05:29:16 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:30.146 nvme0n1 00:04:30.146 05:29:19 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:30.146 [2024-05-15 05:29:19.730148] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:30.146 request: 00:04:30.146 { 00:04:30.146 "nvme_ctrlr_name": "nvme0", 00:04:30.146 "password": "test", 00:04:30.146 "method": "bdev_nvme_opal_revert", 00:04:30.146 "req_id": 1 00:04:30.146 } 00:04:30.146 Got JSON-RPC error response 00:04:30.146 response: 00:04:30.146 { 00:04:30.146 "code": -32602, 00:04:30.146 "message": "Invalid parameters" 00:04:30.146 } 00:04:30.146 05:29:19 -- common/autotest_common.sh@1601 -- # true 00:04:30.146 05:29:19 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:04:30.146 05:29:19 -- common/autotest_common.sh@1605 -- # killprocess 3242222 00:04:30.146 05:29:19 -- common/autotest_common.sh@947 -- # '[' -z 3242222 ']' 00:04:30.146 05:29:19 -- common/autotest_common.sh@951 -- # kill -0 3242222 00:04:30.146 05:29:19 -- common/autotest_common.sh@952 -- # uname 00:04:30.146 05:29:19 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:30.146 05:29:19 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3242222 00:04:30.146 05:29:19 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:30.146 05:29:19 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:30.146 05:29:19 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3242222' 00:04:30.146 killing process with pid 3242222 00:04:30.146 05:29:19 -- common/autotest_common.sh@966 -- # kill 3242222 00:04:30.146 05:29:19 -- common/autotest_common.sh@971 -- # wait 3242222 00:04:32.050 05:29:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:32.050 05:29:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:32.050 05:29:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.050 05:29:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.050 05:29:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:32.050 05:29:21 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:32.050 05:29:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.050 05:29:21 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:32.050 05:29:21 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:32.050 05:29:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 
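The Opal revert attempt recorded above amounts to two RPCs against the running spdk_tgt; this is a minimal sketch assuming the target is already listening on the default /var/tmp/spdk.sock, with the BDF and bdev name copied from the trace. On this controller, which reports no Opal support, the revert returns the -32602 "Invalid parameters" error shown in the log:

    #!/usr/bin/env bash
    # Sketch of the traced Opal revert attempt against a running spdk_tgt.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc="$rootdir/scripts/rpc.py"

    # Attach the controller at the BDF discovered earlier.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0

    # Attempt the Opal revert; a drive without Opal support rejects it,
    # producing the "nvme0 not support opal" error seen above.
    if ! "$rpc" bdev_nvme_opal_revert -b nvme0 -p test; then
        echo "opal revert rejected (expected on non-Opal drives)"
    fi
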
00:04:32.050 05:29:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.050 ************************************ 00:04:32.050 START TEST env 00:04:32.050 ************************************ 00:04:32.050 05:29:21 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:32.050 * Looking for test storage... 00:04:32.050 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:32.050 05:29:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.050 05:29:22 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:32.050 05:29:22 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:32.050 05:29:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.309 ************************************ 00:04:32.309 START TEST env_memory 00:04:32.309 ************************************ 00:04:32.309 05:29:22 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.309 00:04:32.309 00:04:32.309 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.309 http://cunit.sourceforge.net/ 00:04:32.309 00:04:32.309 00:04:32.309 Suite: memory 00:04:32.309 Test: alloc and free memory map ...[2024-05-15 05:29:22.140310] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.309 passed 00:04:32.309 Test: mem map translation ...[2024-05-15 05:29:22.152990] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.309 [2024-05-15 05:29:22.153007] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.309 [2024-05-15 05:29:22.153036] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.309 [2024-05-15 05:29:22.153046] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.309 passed 00:04:32.309 Test: mem map registration ...[2024-05-15 05:29:22.173387] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:32.309 [2024-05-15 05:29:22.173404] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:32.309 passed 00:04:32.309 Test: mem map adjacent registrations ...passed 00:04:32.309 00:04:32.309 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.309 suites 1 1 n/a 0 0 00:04:32.309 tests 4 4 4 0 0 00:04:32.309 asserts 152 152 152 0 n/a 00:04:32.309 00:04:32.309 Elapsed time = 0.083 seconds 00:04:32.309 00:04:32.309 real 0m0.095s 00:04:32.309 user 0m0.086s 00:04:32.309 sys 0m0.009s 00:04:32.309 05:29:22 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:32.309 05:29:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.309 ************************************ 
00:04:32.309 END TEST env_memory 00:04:32.309 ************************************ 00:04:32.309 05:29:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.309 05:29:22 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:32.309 05:29:22 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:32.309 05:29:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.309 ************************************ 00:04:32.309 START TEST env_vtophys 00:04:32.309 ************************************ 00:04:32.309 05:29:22 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.309 EAL: lib.eal log level changed from notice to debug 00:04:32.309 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.309 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.309 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.309 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.309 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.309 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.309 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.309 EAL: Detected lcore 7 as core 8 on socket 0 00:04:32.309 EAL: Detected lcore 8 as core 9 on socket 0 00:04:32.309 EAL: Detected lcore 9 as core 10 on socket 0 00:04:32.309 EAL: Detected lcore 10 as core 11 on socket 0 00:04:32.309 EAL: Detected lcore 11 as core 12 on socket 0 00:04:32.309 EAL: Detected lcore 12 as core 13 on socket 0 00:04:32.309 EAL: Detected lcore 13 as core 14 on socket 0 00:04:32.309 EAL: Detected lcore 14 as core 16 on socket 0 00:04:32.309 EAL: Detected lcore 15 as core 17 on socket 0 00:04:32.309 EAL: Detected lcore 16 as core 18 on socket 0 00:04:32.309 EAL: Detected lcore 17 as core 19 on socket 0 00:04:32.309 EAL: Detected lcore 18 as core 20 on socket 0 00:04:32.309 EAL: Detected lcore 19 as core 21 on socket 0 00:04:32.309 EAL: Detected lcore 20 as core 22 on socket 0 00:04:32.309 EAL: Detected lcore 21 as core 24 on socket 0 00:04:32.309 EAL: Detected lcore 22 as core 25 on socket 0 00:04:32.309 EAL: Detected lcore 23 as core 26 on socket 0 00:04:32.309 EAL: Detected lcore 24 as core 27 on socket 0 00:04:32.309 EAL: Detected lcore 25 as core 28 on socket 0 00:04:32.309 EAL: Detected lcore 26 as core 29 on socket 0 00:04:32.309 EAL: Detected lcore 27 as core 30 on socket 0 00:04:32.309 EAL: Detected lcore 28 as core 0 on socket 1 00:04:32.309 EAL: Detected lcore 29 as core 1 on socket 1 00:04:32.309 EAL: Detected lcore 30 as core 2 on socket 1 00:04:32.309 EAL: Detected lcore 31 as core 3 on socket 1 00:04:32.309 EAL: Detected lcore 32 as core 4 on socket 1 00:04:32.309 EAL: Detected lcore 33 as core 5 on socket 1 00:04:32.309 EAL: Detected lcore 34 as core 6 on socket 1 00:04:32.309 EAL: Detected lcore 35 as core 8 on socket 1 00:04:32.309 EAL: Detected lcore 36 as core 9 on socket 1 00:04:32.309 EAL: Detected lcore 37 as core 10 on socket 1 00:04:32.309 EAL: Detected lcore 38 as core 11 on socket 1 00:04:32.309 EAL: Detected lcore 39 as core 12 on socket 1 00:04:32.309 EAL: Detected lcore 40 as core 13 on socket 1 00:04:32.309 EAL: Detected lcore 41 as core 14 on socket 1 00:04:32.309 EAL: Detected lcore 42 as core 16 on socket 1 00:04:32.309 EAL: Detected lcore 43 as core 17 on socket 1 00:04:32.309 EAL: Detected lcore 44 as core 18 on socket 1 00:04:32.309 EAL: Detected lcore 45 as core 19 on socket 1 00:04:32.309 EAL: Detected lcore 46 as core 20 on 
socket 1 00:04:32.309 EAL: Detected lcore 47 as core 21 on socket 1 00:04:32.309 EAL: Detected lcore 48 as core 22 on socket 1 00:04:32.309 EAL: Detected lcore 49 as core 24 on socket 1 00:04:32.309 EAL: Detected lcore 50 as core 25 on socket 1 00:04:32.309 EAL: Detected lcore 51 as core 26 on socket 1 00:04:32.309 EAL: Detected lcore 52 as core 27 on socket 1 00:04:32.309 EAL: Detected lcore 53 as core 28 on socket 1 00:04:32.309 EAL: Detected lcore 54 as core 29 on socket 1 00:04:32.309 EAL: Detected lcore 55 as core 30 on socket 1 00:04:32.309 EAL: Detected lcore 56 as core 0 on socket 0 00:04:32.309 EAL: Detected lcore 57 as core 1 on socket 0 00:04:32.309 EAL: Detected lcore 58 as core 2 on socket 0 00:04:32.309 EAL: Detected lcore 59 as core 3 on socket 0 00:04:32.309 EAL: Detected lcore 60 as core 4 on socket 0 00:04:32.309 EAL: Detected lcore 61 as core 5 on socket 0 00:04:32.309 EAL: Detected lcore 62 as core 6 on socket 0 00:04:32.309 EAL: Detected lcore 63 as core 8 on socket 0 00:04:32.309 EAL: Detected lcore 64 as core 9 on socket 0 00:04:32.309 EAL: Detected lcore 65 as core 10 on socket 0 00:04:32.309 EAL: Detected lcore 66 as core 11 on socket 0 00:04:32.309 EAL: Detected lcore 67 as core 12 on socket 0 00:04:32.309 EAL: Detected lcore 68 as core 13 on socket 0 00:04:32.309 EAL: Detected lcore 69 as core 14 on socket 0 00:04:32.309 EAL: Detected lcore 70 as core 16 on socket 0 00:04:32.309 EAL: Detected lcore 71 as core 17 on socket 0 00:04:32.309 EAL: Detected lcore 72 as core 18 on socket 0 00:04:32.309 EAL: Detected lcore 73 as core 19 on socket 0 00:04:32.309 EAL: Detected lcore 74 as core 20 on socket 0 00:04:32.309 EAL: Detected lcore 75 as core 21 on socket 0 00:04:32.309 EAL: Detected lcore 76 as core 22 on socket 0 00:04:32.309 EAL: Detected lcore 77 as core 24 on socket 0 00:04:32.310 EAL: Detected lcore 78 as core 25 on socket 0 00:04:32.310 EAL: Detected lcore 79 as core 26 on socket 0 00:04:32.310 EAL: Detected lcore 80 as core 27 on socket 0 00:04:32.310 EAL: Detected lcore 81 as core 28 on socket 0 00:04:32.310 EAL: Detected lcore 82 as core 29 on socket 0 00:04:32.310 EAL: Detected lcore 83 as core 30 on socket 0 00:04:32.310 EAL: Detected lcore 84 as core 0 on socket 1 00:04:32.310 EAL: Detected lcore 85 as core 1 on socket 1 00:04:32.310 EAL: Detected lcore 86 as core 2 on socket 1 00:04:32.310 EAL: Detected lcore 87 as core 3 on socket 1 00:04:32.310 EAL: Detected lcore 88 as core 4 on socket 1 00:04:32.310 EAL: Detected lcore 89 as core 5 on socket 1 00:04:32.310 EAL: Detected lcore 90 as core 6 on socket 1 00:04:32.310 EAL: Detected lcore 91 as core 8 on socket 1 00:04:32.310 EAL: Detected lcore 92 as core 9 on socket 1 00:04:32.310 EAL: Detected lcore 93 as core 10 on socket 1 00:04:32.310 EAL: Detected lcore 94 as core 11 on socket 1 00:04:32.310 EAL: Detected lcore 95 as core 12 on socket 1 00:04:32.310 EAL: Detected lcore 96 as core 13 on socket 1 00:04:32.310 EAL: Detected lcore 97 as core 14 on socket 1 00:04:32.310 EAL: Detected lcore 98 as core 16 on socket 1 00:04:32.310 EAL: Detected lcore 99 as core 17 on socket 1 00:04:32.310 EAL: Detected lcore 100 as core 18 on socket 1 00:04:32.310 EAL: Detected lcore 101 as core 19 on socket 1 00:04:32.310 EAL: Detected lcore 102 as core 20 on socket 1 00:04:32.310 EAL: Detected lcore 103 as core 21 on socket 1 00:04:32.310 EAL: Detected lcore 104 as core 22 on socket 1 00:04:32.310 EAL: Detected lcore 105 as core 24 on socket 1 00:04:32.310 EAL: Detected lcore 106 as core 25 on socket 1 00:04:32.310 
EAL: Detected lcore 107 as core 26 on socket 1 00:04:32.310 EAL: Detected lcore 108 as core 27 on socket 1 00:04:32.310 EAL: Detected lcore 109 as core 28 on socket 1 00:04:32.310 EAL: Detected lcore 110 as core 29 on socket 1 00:04:32.310 EAL: Detected lcore 111 as core 30 on socket 1 00:04:32.310 EAL: Maximum logical cores by configuration: 128 00:04:32.310 EAL: Detected CPU lcores: 112 00:04:32.310 EAL: Detected NUMA nodes: 2 00:04:32.310 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:32.310 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:32.310 EAL: Checking presence of .so 'librte_eal.so' 00:04:32.310 EAL: Detected static linkage of DPDK 00:04:32.310 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.569 EAL: Bus pci wants IOVA as 'DC' 00:04:32.569 EAL: Buses did not request a specific IOVA mode. 00:04:32.569 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:32.569 EAL: Selected IOVA mode 'VA' 00:04:32.569 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.569 EAL: Probing VFIO support... 00:04:32.569 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.569 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.569 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.569 EAL: VFIO support initialized 00:04:32.569 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.569 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.569 EAL: Setting up physically contiguous memory... 00:04:32.569 EAL: Setting maximum number of open files to 524288 00:04:32.569 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.569 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.569 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.569 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.569 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.569 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.569 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.569 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.569 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.569 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.569 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.569 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.569 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.569 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.569 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.569 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.570 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.570 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.570 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.570 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.570 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.570 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.570 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:32.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.570 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.570 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.570 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.570 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.570 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.570 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.570 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:32.570 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.570 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.570 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.570 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.570 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.570 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.570 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.570 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.570 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.570 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:32.570 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.570 EAL: Hugepages will be freed exactly as allocated. 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: TSC frequency is ~2500000 KHz 00:04:32.570 EAL: Main lcore 0 is ready (tid=7f7144d8da00;cpuset=[0]) 00:04:32.570 EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 0 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.570 00:04:32.570 00:04:32.570 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.570 http://cunit.sourceforge.net/ 00:04:32.570 00:04:32.570 00:04:32.570 Suite: components_suite 00:04:32.570 Test: vtophys_malloc_test ...passed 00:04:32.570 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.570 EAL: Trying to obtain current memory policy. 
00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.570 EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.570 EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.570 EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.570 EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.570 EAL: Trying to obtain current memory policy. 
00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.570 EAL: Trying to obtain current memory policy. 00:04:32.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.570 EAL: Restoring previous memory policy: 4 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.570 EAL: request: mp_malloc_sync 00:04:32.570 EAL: No shared files mode enabled, IPC is disabled 00:04:32.570 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.829 EAL: request: mp_malloc_sync 00:04:32.829 EAL: No shared files mode enabled, IPC is disabled 00:04:32.829 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.829 EAL: Trying to obtain current memory policy. 00:04:32.829 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.829 EAL: Restoring previous memory policy: 4 00:04:32.829 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.829 EAL: request: mp_malloc_sync 00:04:32.829 EAL: No shared files mode enabled, IPC is disabled 00:04:32.829 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.829 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.089 EAL: request: mp_malloc_sync 00:04:33.089 EAL: No shared files mode enabled, IPC is disabled 00:04:33.089 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.089 EAL: Trying to obtain current memory policy. 
00:04:33.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.089 EAL: Restoring previous memory policy: 4 00:04:33.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.089 EAL: request: mp_malloc_sync 00:04:33.089 EAL: No shared files mode enabled, IPC is disabled 00:04:33.089 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.360 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.360 EAL: request: mp_malloc_sync 00:04:33.361 EAL: No shared files mode enabled, IPC is disabled 00:04:33.361 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.361 passed 00:04:33.361 00:04:33.361 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.361 suites 1 1 n/a 0 0 00:04:33.361 tests 2 2 2 0 0 00:04:33.361 asserts 497 497 497 0 n/a 00:04:33.361 00:04:33.361 Elapsed time = 0.963 seconds 00:04:33.361 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.361 EAL: request: mp_malloc_sync 00:04:33.361 EAL: No shared files mode enabled, IPC is disabled 00:04:33.361 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.361 EAL: No shared files mode enabled, IPC is disabled 00:04:33.361 EAL: No shared files mode enabled, IPC is disabled 00:04:33.361 EAL: No shared files mode enabled, IPC is disabled 00:04:33.361 00:04:33.361 real 0m1.081s 00:04:33.361 user 0m0.633s 00:04:33.361 sys 0m0.425s 00:04:33.361 05:29:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:33.361 05:29:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.361 ************************************ 00:04:33.361 END TEST env_vtophys 00:04:33.361 ************************************ 00:04:33.626 05:29:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.626 05:29:23 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:33.626 05:29:23 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:33.626 05:29:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 ************************************ 00:04:33.626 START TEST env_pci 00:04:33.626 ************************************ 00:04:33.626 05:29:23 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.626 00:04:33.626 00:04:33.626 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.626 http://cunit.sourceforge.net/ 00:04:33.626 00:04:33.626 00:04:33.626 Suite: pci 00:04:33.626 Test: pci_hook ...[2024-05-15 05:29:23.475098] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3243645 has claimed it 00:04:33.626 EAL: Cannot find device (10000:00:01.0) 00:04:33.626 EAL: Failed to attach device on primary process 00:04:33.626 passed 00:04:33.626 00:04:33.626 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.626 suites 1 1 n/a 0 0 00:04:33.626 tests 1 1 1 0 0 00:04:33.626 asserts 25 25 25 0 n/a 00:04:33.626 00:04:33.626 Elapsed time = 0.034 seconds 00:04:33.626 00:04:33.626 real 0m0.052s 00:04:33.626 user 0m0.011s 00:04:33.626 sys 0m0.041s 00:04:33.626 05:29:23 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:33.626 05:29:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 ************************************ 00:04:33.626 END TEST env_pci 00:04:33.626 ************************************ 00:04:33.626 05:29:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.626 
05:29:23 env -- env/env.sh@15 -- # uname 00:04:33.626 05:29:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.626 05:29:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.626 05:29:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.626 05:29:23 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:33.626 05:29:23 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:33.626 05:29:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 ************************************ 00:04:33.626 START TEST env_dpdk_post_init 00:04:33.626 ************************************ 00:04:33.626 05:29:23 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.626 EAL: Detected CPU lcores: 112 00:04:33.626 EAL: Detected NUMA nodes: 2 00:04:33.626 EAL: Detected static linkage of DPDK 00:04:33.626 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.885 EAL: Selected IOVA mode 'VA' 00:04:33.885 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.885 EAL: VFIO support initialized 00:04:33.885 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.885 EAL: Using IOMMU type 1 (Type 1) 00:04:34.453 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:38.645 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:38.645 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:38.645 Starting DPDK initialization... 00:04:38.645 Starting SPDK post initialization... 00:04:38.645 SPDK NVMe probe 00:04:38.645 Attaching to 0000:d8:00.0 00:04:38.645 Attached to 0000:d8:00.0 00:04:38.645 Cleaning up... 
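The post-initialization probe above can also be run outside the harness; a rough sketch, assuming hugepages still need to be reserved and devices bound first (HUGEMEM is in megabytes and the value here is only an example), with the test flags copied from this run:

# Reserve hugepages and bind NVMe devices (setup.sh ships with SPDK).
sudo HUGEMEM=4096 ./scripts/setup.sh
# Re-run the DPDK post-init test with the same core mask and base virtual address.
sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000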
00:04:38.645 00:04:38.645 real 0m4.751s 00:04:38.645 user 0m3.547s 00:04:38.645 sys 0m0.450s 00:04:38.645 05:29:28 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.645 05:29:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 END TEST env_dpdk_post_init 00:04:38.645 ************************************ 00:04:38.645 05:29:28 env -- env/env.sh@26 -- # uname 00:04:38.645 05:29:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.645 05:29:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.645 05:29:28 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:38.645 05:29:28 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:38.645 05:29:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 START TEST env_mem_callbacks 00:04:38.645 ************************************ 00:04:38.645 05:29:28 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.645 EAL: Detected CPU lcores: 112 00:04:38.645 EAL: Detected NUMA nodes: 2 00:04:38.645 EAL: Detected static linkage of DPDK 00:04:38.645 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.645 EAL: Selected IOVA mode 'VA' 00:04:38.645 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.645 EAL: VFIO support initialized 00:04:38.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.645 00:04:38.645 00:04:38.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.645 http://cunit.sourceforge.net/ 00:04:38.645 00:04:38.645 00:04:38.645 Suite: memory 00:04:38.645 Test: test ... 
00:04:38.645 register 0x200000200000 2097152 00:04:38.645 malloc 3145728 00:04:38.645 register 0x200000400000 4194304 00:04:38.645 buf 0x200000500000 len 3145728 PASSED 00:04:38.645 malloc 64 00:04:38.645 buf 0x2000004fff40 len 64 PASSED 00:04:38.645 malloc 4194304 00:04:38.645 register 0x200000800000 6291456 00:04:38.645 buf 0x200000a00000 len 4194304 PASSED 00:04:38.645 free 0x200000500000 3145728 00:04:38.645 free 0x2000004fff40 64 00:04:38.645 unregister 0x200000400000 4194304 PASSED 00:04:38.645 free 0x200000a00000 4194304 00:04:38.645 unregister 0x200000800000 6291456 PASSED 00:04:38.645 malloc 8388608 00:04:38.645 register 0x200000400000 10485760 00:04:38.645 buf 0x200000600000 len 8388608 PASSED 00:04:38.645 free 0x200000600000 8388608 00:04:38.645 unregister 0x200000400000 10485760 PASSED 00:04:38.645 passed 00:04:38.645 00:04:38.645 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.645 suites 1 1 n/a 0 0 00:04:38.645 tests 1 1 1 0 0 00:04:38.645 asserts 15 15 15 0 n/a 00:04:38.645 00:04:38.645 Elapsed time = 0.005 seconds 00:04:38.645 00:04:38.645 real 0m0.065s 00:04:38.645 user 0m0.019s 00:04:38.645 sys 0m0.046s 00:04:38.645 05:29:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.645 05:29:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 END TEST env_mem_callbacks 00:04:38.645 ************************************ 00:04:38.645 00:04:38.645 real 0m6.601s 00:04:38.645 user 0m4.506s 00:04:38.645 sys 0m1.335s 00:04:38.645 05:29:28 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.645 05:29:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 END TEST env 00:04:38.645 ************************************ 00:04:38.645 05:29:28 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.645 05:29:28 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:38.645 05:29:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:38.645 05:29:28 -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 START TEST rpc 00:04:38.645 ************************************ 00:04:38.645 05:29:28 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.904 * Looking for test storage... 00:04:38.904 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:38.904 05:29:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3244683 00:04:38.904 05:29:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.904 05:29:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.904 05:29:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3244683 00:04:38.904 05:29:28 rpc -- common/autotest_common.sh@828 -- # '[' -z 3244683 ']' 00:04:38.904 05:29:28 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.904 05:29:28 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:38.904 05:29:28 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
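The start-and-wait sequence that rpc.sh performs here can be mimicked manually; a small sketch, assuming the SPDK repository root as the working directory and the default /var/tmp/spdk.sock RPC socket:

# Start the target with the bdev tracepoint group enabled, as rpc.sh does.
./build/bin/spdk_tgt -e bdev &
tgt_pid=$!
# Poll the RPC socket until the target answers, then confirm it is usable.
until scripts/rpc.py bdev_get_bdevs >/dev/null 2>&1; do sleep 0.5; done
scripts/rpc.py bdev_get_bdevs
# Shut the target down when finished.
kill $tgt_pid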
00:04:38.904 05:29:28 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:38.904 05:29:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.905 [2024-05-15 05:29:28.766885] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:04:38.905 [2024-05-15 05:29:28.766946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244683 ] 00:04:38.905 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.905 [2024-05-15 05:29:28.836534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.905 [2024-05-15 05:29:28.910350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.905 [2024-05-15 05:29:28.910396] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3244683' to capture a snapshot of events at runtime. 00:04:38.905 [2024-05-15 05:29:28.910406] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.905 [2024-05-15 05:29:28.910414] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.905 [2024-05-15 05:29:28.910437] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3244683 for offline analysis/debug. 00:04:38.905 [2024-05-15 05:29:28.910465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.841 05:29:29 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:39.841 05:29:29 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:39.842 05:29:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:39.842 05:29:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:39.842 05:29:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.842 05:29:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.842 05:29:29 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:39.842 05:29:29 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:39.842 05:29:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 ************************************ 00:04:39.842 START TEST rpc_integrity 00:04:39.842 ************************************ 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.842 05:29:29 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.842 { 00:04:39.842 "name": "Malloc0", 00:04:39.842 "aliases": [ 00:04:39.842 "f7970b26-b6ab-45f8-8af2-38b8fcc46037" 00:04:39.842 ], 00:04:39.842 "product_name": "Malloc disk", 00:04:39.842 "block_size": 512, 00:04:39.842 "num_blocks": 16384, 00:04:39.842 "uuid": "f7970b26-b6ab-45f8-8af2-38b8fcc46037", 00:04:39.842 "assigned_rate_limits": { 00:04:39.842 "rw_ios_per_sec": 0, 00:04:39.842 "rw_mbytes_per_sec": 0, 00:04:39.842 "r_mbytes_per_sec": 0, 00:04:39.842 "w_mbytes_per_sec": 0 00:04:39.842 }, 00:04:39.842 "claimed": false, 00:04:39.842 "zoned": false, 00:04:39.842 "supported_io_types": { 00:04:39.842 "read": true, 00:04:39.842 "write": true, 00:04:39.842 "unmap": true, 00:04:39.842 "write_zeroes": true, 00:04:39.842 "flush": true, 00:04:39.842 "reset": true, 00:04:39.842 "compare": false, 00:04:39.842 "compare_and_write": false, 00:04:39.842 "abort": true, 00:04:39.842 "nvme_admin": false, 00:04:39.842 "nvme_io": false 00:04:39.842 }, 00:04:39.842 "memory_domains": [ 00:04:39.842 { 00:04:39.842 "dma_device_id": "system", 00:04:39.842 "dma_device_type": 1 00:04:39.842 }, 00:04:39.842 { 00:04:39.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.842 "dma_device_type": 2 00:04:39.842 } 00:04:39.842 ], 00:04:39.842 "driver_specific": {} 00:04:39.842 } 00:04:39.842 ]' 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 [2024-05-15 05:29:29.770907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.842 [2024-05-15 05:29:29.770944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.842 [2024-05-15 05:29:29.770965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6022060 00:04:39.842 [2024-05-15 05:29:29.770974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.842 [2024-05-15 05:29:29.771822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.842 [2024-05-15 05:29:29.771848] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.842 Passthru0 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.842 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.842 { 00:04:39.842 "name": "Malloc0", 00:04:39.842 "aliases": [ 00:04:39.842 "f7970b26-b6ab-45f8-8af2-38b8fcc46037" 00:04:39.842 ], 00:04:39.842 "product_name": "Malloc disk", 00:04:39.842 "block_size": 512, 00:04:39.842 "num_blocks": 16384, 00:04:39.842 "uuid": "f7970b26-b6ab-45f8-8af2-38b8fcc46037", 00:04:39.842 "assigned_rate_limits": { 00:04:39.842 "rw_ios_per_sec": 0, 00:04:39.842 "rw_mbytes_per_sec": 0, 00:04:39.842 "r_mbytes_per_sec": 0, 00:04:39.842 "w_mbytes_per_sec": 0 00:04:39.842 }, 00:04:39.842 "claimed": true, 00:04:39.842 "claim_type": "exclusive_write", 00:04:39.842 "zoned": false, 00:04:39.842 "supported_io_types": { 00:04:39.842 "read": true, 00:04:39.842 "write": true, 00:04:39.842 "unmap": true, 00:04:39.842 "write_zeroes": true, 00:04:39.842 "flush": true, 00:04:39.842 "reset": true, 00:04:39.842 "compare": false, 00:04:39.842 "compare_and_write": false, 00:04:39.842 "abort": true, 00:04:39.842 "nvme_admin": false, 00:04:39.842 "nvme_io": false 00:04:39.842 }, 00:04:39.842 "memory_domains": [ 00:04:39.842 { 00:04:39.842 "dma_device_id": "system", 00:04:39.842 "dma_device_type": 1 00:04:39.843 }, 00:04:39.843 { 00:04:39.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.843 "dma_device_type": 2 00:04:39.843 } 00:04:39.843 ], 00:04:39.843 "driver_specific": {} 00:04:39.843 }, 00:04:39.843 { 00:04:39.843 "name": "Passthru0", 00:04:39.843 "aliases": [ 00:04:39.843 "71ace5d7-b965-5051-af2e-59c3e397f9a6" 00:04:39.843 ], 00:04:39.843 "product_name": "passthru", 00:04:39.843 "block_size": 512, 00:04:39.843 "num_blocks": 16384, 00:04:39.843 "uuid": "71ace5d7-b965-5051-af2e-59c3e397f9a6", 00:04:39.843 "assigned_rate_limits": { 00:04:39.843 "rw_ios_per_sec": 0, 00:04:39.843 "rw_mbytes_per_sec": 0, 00:04:39.843 "r_mbytes_per_sec": 0, 00:04:39.843 "w_mbytes_per_sec": 0 00:04:39.843 }, 00:04:39.843 "claimed": false, 00:04:39.843 "zoned": false, 00:04:39.843 "supported_io_types": { 00:04:39.843 "read": true, 00:04:39.843 "write": true, 00:04:39.843 "unmap": true, 00:04:39.843 "write_zeroes": true, 00:04:39.843 "flush": true, 00:04:39.843 "reset": true, 00:04:39.843 "compare": false, 00:04:39.843 "compare_and_write": false, 00:04:39.843 "abort": true, 00:04:39.843 "nvme_admin": false, 00:04:39.843 "nvme_io": false 00:04:39.843 }, 00:04:39.843 "memory_domains": [ 00:04:39.843 { 00:04:39.843 "dma_device_id": "system", 00:04:39.843 "dma_device_type": 1 00:04:39.843 }, 00:04:39.843 { 00:04:39.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.843 "dma_device_type": 2 00:04:39.843 } 00:04:39.843 ], 00:04:39.843 "driver_specific": { 00:04:39.843 "passthru": { 00:04:39.843 "name": "Passthru0", 00:04:39.843 "base_bdev_name": "Malloc0" 00:04:39.843 } 00:04:39.843 } 00:04:39.843 } 00:04:39.843 ]' 00:04:39.843 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.843 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.843 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.843 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.843 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
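Stripped of the harness plumbing, the rpc_integrity sequence above comes down to a handful of RPCs; a compact sketch, assuming a target is already serving the default socket (the auto-assigned name Malloc0 and the Passthru0 label match what rpc.sh uses):

# Create an 8 MB malloc bdev with 512-byte blocks; the call prints the new bdev name.
malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
# Wrap it in a passthru bdev and check that both bdevs are listed (jq length == 2 above).
scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length
# Tear down in reverse order.
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete "$malloc"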
00:04:39.843 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.843 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.843 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.843 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.102 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.102 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.102 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.102 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.102 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.102 05:29:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.102 00:04:40.102 real 0m0.287s 00:04:40.102 user 0m0.179s 00:04:40.102 sys 0m0.045s 00:04:40.102 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.102 05:29:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 ************************************ 00:04:40.102 END TEST rpc_integrity 00:04:40.102 ************************************ 00:04:40.102 05:29:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.102 05:29:29 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:40.102 05:29:29 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:40.102 05:29:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 ************************************ 00:04:40.102 START TEST rpc_plugins 00:04:40.102 ************************************ 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.102 { 00:04:40.102 "name": "Malloc1", 00:04:40.102 "aliases": [ 00:04:40.102 "67f1e301-96c6-46a7-959b-0335132b1e19" 00:04:40.102 ], 00:04:40.102 "product_name": "Malloc disk", 00:04:40.102 "block_size": 4096, 00:04:40.102 "num_blocks": 256, 00:04:40.102 "uuid": "67f1e301-96c6-46a7-959b-0335132b1e19", 00:04:40.102 "assigned_rate_limits": { 00:04:40.102 "rw_ios_per_sec": 0, 00:04:40.102 "rw_mbytes_per_sec": 0, 00:04:40.102 "r_mbytes_per_sec": 0, 00:04:40.102 "w_mbytes_per_sec": 0 00:04:40.102 }, 00:04:40.102 "claimed": false, 00:04:40.102 "zoned": false, 00:04:40.102 "supported_io_types": { 00:04:40.102 "read": true, 00:04:40.102 "write": true, 00:04:40.102 "unmap": true, 00:04:40.102 "write_zeroes": true, 
00:04:40.102 "flush": true, 00:04:40.102 "reset": true, 00:04:40.102 "compare": false, 00:04:40.102 "compare_and_write": false, 00:04:40.102 "abort": true, 00:04:40.102 "nvme_admin": false, 00:04:40.102 "nvme_io": false 00:04:40.102 }, 00:04:40.102 "memory_domains": [ 00:04:40.102 { 00:04:40.102 "dma_device_id": "system", 00:04:40.102 "dma_device_type": 1 00:04:40.102 }, 00:04:40.102 { 00:04:40.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.102 "dma_device_type": 2 00:04:40.102 } 00:04:40.102 ], 00:04:40.102 "driver_specific": {} 00:04:40.102 } 00:04:40.102 ]' 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.102 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.361 05:29:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.361 00:04:40.361 real 0m0.139s 00:04:40.361 user 0m0.088s 00:04:40.362 sys 0m0.021s 00:04:40.362 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.362 05:29:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.362 ************************************ 00:04:40.362 END TEST rpc_plugins 00:04:40.362 ************************************ 00:04:40.362 05:29:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.362 05:29:30 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:40.362 05:29:30 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:40.362 05:29:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.362 ************************************ 00:04:40.362 START TEST rpc_trace_cmd_test 00:04:40.362 ************************************ 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.362 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3244683", 00:04:40.362 "tpoint_group_mask": "0x8", 00:04:40.362 "iscsi_conn": { 00:04:40.362 "mask": "0x2", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "scsi": { 00:04:40.362 "mask": "0x4", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "bdev": { 00:04:40.362 "mask": "0x8", 00:04:40.362 "tpoint_mask": 
"0xffffffffffffffff" 00:04:40.362 }, 00:04:40.362 "nvmf_rdma": { 00:04:40.362 "mask": "0x10", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "nvmf_tcp": { 00:04:40.362 "mask": "0x20", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "ftl": { 00:04:40.362 "mask": "0x40", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "blobfs": { 00:04:40.362 "mask": "0x80", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "dsa": { 00:04:40.362 "mask": "0x200", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "thread": { 00:04:40.362 "mask": "0x400", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "nvme_pcie": { 00:04:40.362 "mask": "0x800", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "iaa": { 00:04:40.362 "mask": "0x1000", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "nvme_tcp": { 00:04:40.362 "mask": "0x2000", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "bdev_nvme": { 00:04:40.362 "mask": "0x4000", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 }, 00:04:40.362 "sock": { 00:04:40.362 "mask": "0x8000", 00:04:40.362 "tpoint_mask": "0x0" 00:04:40.362 } 00:04:40.362 }' 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.362 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.621 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.621 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.621 05:29:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.621 00:04:40.621 real 0m0.230s 00:04:40.621 user 0m0.193s 00:04:40.621 sys 0m0.031s 00:04:40.621 05:29:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.621 05:29:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 ************************************ 00:04:40.621 END TEST rpc_trace_cmd_test 00:04:40.621 ************************************ 00:04:40.621 05:29:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.621 05:29:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.621 05:29:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.621 05:29:30 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:40.621 05:29:30 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:40.621 05:29:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 ************************************ 00:04:40.621 START TEST rpc_daemon_integrity 00:04:40.621 ************************************ 00:04:40.621 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:40.621 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.621 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.621 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 05:29:30 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.622 { 00:04:40.622 "name": "Malloc2", 00:04:40.622 "aliases": [ 00:04:40.622 "99b8fb81-6cca-4977-8d6d-55af07d39a36" 00:04:40.622 ], 00:04:40.622 "product_name": "Malloc disk", 00:04:40.622 "block_size": 512, 00:04:40.622 "num_blocks": 16384, 00:04:40.622 "uuid": "99b8fb81-6cca-4977-8d6d-55af07d39a36", 00:04:40.622 "assigned_rate_limits": { 00:04:40.622 "rw_ios_per_sec": 0, 00:04:40.622 "rw_mbytes_per_sec": 0, 00:04:40.622 "r_mbytes_per_sec": 0, 00:04:40.622 "w_mbytes_per_sec": 0 00:04:40.622 }, 00:04:40.622 "claimed": false, 00:04:40.622 "zoned": false, 00:04:40.622 "supported_io_types": { 00:04:40.622 "read": true, 00:04:40.622 "write": true, 00:04:40.622 "unmap": true, 00:04:40.622 "write_zeroes": true, 00:04:40.622 "flush": true, 00:04:40.622 "reset": true, 00:04:40.622 "compare": false, 00:04:40.622 "compare_and_write": false, 00:04:40.622 "abort": true, 00:04:40.622 "nvme_admin": false, 00:04:40.622 "nvme_io": false 00:04:40.622 }, 00:04:40.622 "memory_domains": [ 00:04:40.622 { 00:04:40.622 "dma_device_id": "system", 00:04:40.622 "dma_device_type": 1 00:04:40.622 }, 00:04:40.622 { 00:04:40.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.622 "dma_device_type": 2 00:04:40.622 } 00:04:40.622 ], 00:04:40.622 "driver_specific": {} 00:04:40.622 } 00:04:40.622 ]' 00:04:40.622 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.881 [2024-05-15 05:29:30.657242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.881 [2024-05-15 05:29:30.657275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.881 [2024-05-15 05:29:30.657291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6023960 00:04:40.881 [2024-05-15 05:29:30.657301] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.881 [2024-05-15 05:29:30.658006] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.881 [2024-05-15 05:29:30.658029] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.881 Passthru0 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.881 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.881 { 00:04:40.881 "name": "Malloc2", 00:04:40.881 "aliases": [ 00:04:40.881 "99b8fb81-6cca-4977-8d6d-55af07d39a36" 00:04:40.881 ], 00:04:40.881 "product_name": "Malloc disk", 00:04:40.881 "block_size": 512, 00:04:40.881 "num_blocks": 16384, 00:04:40.881 "uuid": "99b8fb81-6cca-4977-8d6d-55af07d39a36", 00:04:40.881 "assigned_rate_limits": { 00:04:40.881 "rw_ios_per_sec": 0, 00:04:40.881 "rw_mbytes_per_sec": 0, 00:04:40.881 "r_mbytes_per_sec": 0, 00:04:40.881 "w_mbytes_per_sec": 0 00:04:40.881 }, 00:04:40.881 "claimed": true, 00:04:40.881 "claim_type": "exclusive_write", 00:04:40.881 "zoned": false, 00:04:40.881 "supported_io_types": { 00:04:40.881 "read": true, 00:04:40.881 "write": true, 00:04:40.881 "unmap": true, 00:04:40.881 "write_zeroes": true, 00:04:40.881 "flush": true, 00:04:40.881 "reset": true, 00:04:40.881 "compare": false, 00:04:40.881 "compare_and_write": false, 00:04:40.881 "abort": true, 00:04:40.881 "nvme_admin": false, 00:04:40.881 "nvme_io": false 00:04:40.881 }, 00:04:40.881 "memory_domains": [ 00:04:40.881 { 00:04:40.881 "dma_device_id": "system", 00:04:40.881 "dma_device_type": 1 00:04:40.881 }, 00:04:40.881 { 00:04:40.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.882 "dma_device_type": 2 00:04:40.882 } 00:04:40.882 ], 00:04:40.882 "driver_specific": {} 00:04:40.882 }, 00:04:40.882 { 00:04:40.882 "name": "Passthru0", 00:04:40.882 "aliases": [ 00:04:40.882 "41350645-18fe-5a7d-90ef-e9b0ad716e81" 00:04:40.882 ], 00:04:40.882 "product_name": "passthru", 00:04:40.882 "block_size": 512, 00:04:40.882 "num_blocks": 16384, 00:04:40.882 "uuid": "41350645-18fe-5a7d-90ef-e9b0ad716e81", 00:04:40.882 "assigned_rate_limits": { 00:04:40.882 "rw_ios_per_sec": 0, 00:04:40.882 "rw_mbytes_per_sec": 0, 00:04:40.882 "r_mbytes_per_sec": 0, 00:04:40.882 "w_mbytes_per_sec": 0 00:04:40.882 }, 00:04:40.882 "claimed": false, 00:04:40.882 "zoned": false, 00:04:40.882 "supported_io_types": { 00:04:40.882 "read": true, 00:04:40.882 "write": true, 00:04:40.882 "unmap": true, 00:04:40.882 "write_zeroes": true, 00:04:40.882 "flush": true, 00:04:40.882 "reset": true, 00:04:40.882 "compare": false, 00:04:40.882 "compare_and_write": false, 00:04:40.882 "abort": true, 00:04:40.882 "nvme_admin": false, 00:04:40.882 "nvme_io": false 00:04:40.882 }, 00:04:40.882 "memory_domains": [ 00:04:40.882 { 00:04:40.882 "dma_device_id": "system", 00:04:40.882 "dma_device_type": 1 00:04:40.882 }, 00:04:40.882 { 00:04:40.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.882 "dma_device_type": 2 00:04:40.882 } 00:04:40.882 ], 00:04:40.882 "driver_specific": { 00:04:40.882 "passthru": { 00:04:40.882 "name": "Passthru0", 00:04:40.882 "base_bdev_name": "Malloc2" 00:04:40.882 } 00:04:40.882 } 00:04:40.882 } 00:04:40.882 ]' 00:04:40.882 05:29:30 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.882 00:04:40.882 real 0m0.267s 00:04:40.882 user 0m0.155s 00:04:40.882 sys 0m0.048s 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.882 05:29:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.882 ************************************ 00:04:40.882 END TEST rpc_daemon_integrity 00:04:40.882 ************************************ 00:04:40.882 05:29:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.882 05:29:30 rpc -- rpc/rpc.sh@84 -- # killprocess 3244683 00:04:40.882 05:29:30 rpc -- common/autotest_common.sh@947 -- # '[' -z 3244683 ']' 00:04:40.882 05:29:30 rpc -- common/autotest_common.sh@951 -- # kill -0 3244683 00:04:40.882 05:29:30 rpc -- common/autotest_common.sh@952 -- # uname 00:04:40.882 05:29:30 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:40.882 05:29:30 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3244683 00:04:41.141 05:29:30 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:41.141 05:29:30 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:41.141 05:29:30 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3244683' 00:04:41.141 killing process with pid 3244683 00:04:41.141 05:29:30 rpc -- common/autotest_common.sh@966 -- # kill 3244683 00:04:41.141 05:29:30 rpc -- common/autotest_common.sh@971 -- # wait 3244683 00:04:41.401 00:04:41.401 real 0m2.574s 00:04:41.401 user 0m3.252s 00:04:41.401 sys 0m0.810s 00:04:41.401 05:29:31 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:41.401 05:29:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.401 ************************************ 00:04:41.401 END TEST rpc 00:04:41.401 ************************************ 00:04:41.401 05:29:31 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.401 05:29:31 
-- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:41.401 05:29:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:41.401 05:29:31 -- common/autotest_common.sh@10 -- # set +x 00:04:41.401 ************************************ 00:04:41.401 START TEST skip_rpc 00:04:41.401 ************************************ 00:04:41.401 05:29:31 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.401 * Looking for test storage... 00:04:41.401 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:41.401 05:29:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:41.401 05:29:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:41.401 05:29:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.401 05:29:31 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:41.401 05:29:31 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:41.401 05:29:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.661 ************************************ 00:04:41.661 START TEST skip_rpc 00:04:41.661 ************************************ 00:04:41.661 05:29:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:41.661 05:29:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3245383 00:04:41.661 05:29:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.661 05:29:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.661 05:29:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.661 [2024-05-15 05:29:31.469862] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
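For context on the rpc_daemon_integrity run shown above: it drives the malloc and passthru bdev RPCs through the test helper rpc_cmd and checks the bdev list length with jq after each step. A minimal sketch of the same sequence issued directly with scripts/rpc.py (names, sizes and expected counts are taken from the log above; the relative paths assume the spdk checkout used by this job, and this is not the test script itself):

  # create an 8 MiB malloc bdev with 512-byte blocks; the command prints the new name (Malloc2 above)
  ./scripts/rpc.py bdev_malloc_create 8 512
  # layer a passthru bdev on top and confirm both devices are listed
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length    # 2 in the run above
  # tear down in reverse order and confirm the list is empty again
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length    # 0 in the run above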
00:04:41.661 [2024-05-15 05:29:31.469948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245383 ] 00:04:41.661 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.661 [2024-05-15 05:29:31.538020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.661 [2024-05-15 05:29:31.609324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3245383 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 3245383 ']' 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 3245383 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3245383 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3245383' 00:04:46.937 killing process with pid 3245383 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 3245383 00:04:46.937 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 3245383 00:04:46.937 00:04:46.937 real 0m5.369s 00:04:46.938 user 0m5.133s 00:04:46.938 sys 0m0.278s 00:04:46.938 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.938 05:29:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.938 ************************************ 00:04:46.938 END TEST skip_rpc 
00:04:46.938 ************************************ 00:04:46.938 05:29:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.938 05:29:36 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:46.938 05:29:36 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.938 05:29:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.938 ************************************ 00:04:46.938 START TEST skip_rpc_with_json 00:04:46.938 ************************************ 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3246241 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3246241 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 3246241 ']' 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:46.938 05:29:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.938 [2024-05-15 05:29:36.922680] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
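The skip_rpc case that completed just above starts spdk_tgt with --no-rpc-server and asserts that an RPC call must fail (the NOT wrapper around rpc_cmd spdk_get_version). A rough equivalent in plain shell, with the harness helpers rpc_cmd, NOT and killprocess replaced by ordinary commands and paths relative to the same spdk checkout:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                    # the test also just sleeps, since no RPC socket will appear
  if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: spdk_get_version succeeded without an RPC server" >&2
  fi
  kill $tgt_pid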
00:04:46.938 [2024-05-15 05:29:36.922752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246241 ] 00:04:46.938 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.197 [2024-05-15 05:29:36.994222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.197 [2024-05-15 05:29:37.071930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.765 [2024-05-15 05:29:37.744568] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.765 request: 00:04:47.765 { 00:04:47.765 "trtype": "tcp", 00:04:47.765 "method": "nvmf_get_transports", 00:04:47.765 "req_id": 1 00:04:47.765 } 00:04:47.765 Got JSON-RPC error response 00:04:47.765 response: 00:04:47.765 { 00:04:47.765 "code": -19, 00:04:47.765 "message": "No such device" 00:04:47.765 } 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.765 [2024-05-15 05:29:37.756662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.765 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.025 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:48.025 05:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:48.025 { 00:04:48.025 "subsystems": [ 00:04:48.025 { 00:04:48.025 "subsystem": "scheduler", 00:04:48.025 "config": [ 00:04:48.025 { 00:04:48.025 "method": "framework_set_scheduler", 00:04:48.025 "params": { 00:04:48.025 "name": "static" 00:04:48.025 } 00:04:48.025 } 00:04:48.025 ] 00:04:48.025 }, 00:04:48.025 { 00:04:48.025 "subsystem": "vmd", 00:04:48.025 "config": [] 00:04:48.025 }, 00:04:48.025 { 00:04:48.025 "subsystem": "sock", 00:04:48.025 "config": [ 00:04:48.025 { 00:04:48.025 "method": "sock_impl_set_options", 00:04:48.025 "params": { 00:04:48.025 "impl_name": "posix", 00:04:48.025 "recv_buf_size": 2097152, 00:04:48.025 "send_buf_size": 2097152, 00:04:48.025 "enable_recv_pipe": true, 00:04:48.025 "enable_quickack": false, 00:04:48.025 "enable_placement_id": 0, 00:04:48.025 "enable_zerocopy_send_server": true, 00:04:48.025 "enable_zerocopy_send_client": false, 
00:04:48.025 "zerocopy_threshold": 0, 00:04:48.025 "tls_version": 0, 00:04:48.025 "enable_ktls": false 00:04:48.025 } 00:04:48.025 }, 00:04:48.025 { 00:04:48.025 "method": "sock_impl_set_options", 00:04:48.025 "params": { 00:04:48.025 "impl_name": "ssl", 00:04:48.025 "recv_buf_size": 4096, 00:04:48.025 "send_buf_size": 4096, 00:04:48.025 "enable_recv_pipe": true, 00:04:48.025 "enable_quickack": false, 00:04:48.025 "enable_placement_id": 0, 00:04:48.025 "enable_zerocopy_send_server": true, 00:04:48.025 "enable_zerocopy_send_client": false, 00:04:48.025 "zerocopy_threshold": 0, 00:04:48.025 "tls_version": 0, 00:04:48.025 "enable_ktls": false 00:04:48.025 } 00:04:48.025 } 00:04:48.025 ] 00:04:48.025 }, 00:04:48.025 { 00:04:48.025 "subsystem": "iobuf", 00:04:48.025 "config": [ 00:04:48.025 { 00:04:48.025 "method": "iobuf_set_options", 00:04:48.025 "params": { 00:04:48.025 "small_pool_count": 8192, 00:04:48.025 "large_pool_count": 1024, 00:04:48.025 "small_bufsize": 8192, 00:04:48.025 "large_bufsize": 135168 00:04:48.025 } 00:04:48.025 } 00:04:48.026 ] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "keyring", 00:04:48.026 "config": [] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "vfio_user_target", 00:04:48.026 "config": null 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "accel", 00:04:48.026 "config": [ 00:04:48.026 { 00:04:48.026 "method": "accel_set_options", 00:04:48.026 "params": { 00:04:48.026 "small_cache_size": 128, 00:04:48.026 "large_cache_size": 16, 00:04:48.026 "task_count": 2048, 00:04:48.026 "sequence_count": 2048, 00:04:48.026 "buf_count": 2048 00:04:48.026 } 00:04:48.026 } 00:04:48.026 ] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "bdev", 00:04:48.026 "config": [ 00:04:48.026 { 00:04:48.026 "method": "bdev_set_options", 00:04:48.026 "params": { 00:04:48.026 "bdev_io_pool_size": 65535, 00:04:48.026 "bdev_io_cache_size": 256, 00:04:48.026 "bdev_auto_examine": true, 00:04:48.026 "iobuf_small_cache_size": 128, 00:04:48.026 "iobuf_large_cache_size": 16 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "bdev_raid_set_options", 00:04:48.026 "params": { 00:04:48.026 "process_window_size_kb": 1024 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "bdev_nvme_set_options", 00:04:48.026 "params": { 00:04:48.026 "action_on_timeout": "none", 00:04:48.026 "timeout_us": 0, 00:04:48.026 "timeout_admin_us": 0, 00:04:48.026 "keep_alive_timeout_ms": 10000, 00:04:48.026 "arbitration_burst": 0, 00:04:48.026 "low_priority_weight": 0, 00:04:48.026 "medium_priority_weight": 0, 00:04:48.026 "high_priority_weight": 0, 00:04:48.026 "nvme_adminq_poll_period_us": 10000, 00:04:48.026 "nvme_ioq_poll_period_us": 0, 00:04:48.026 "io_queue_requests": 0, 00:04:48.026 "delay_cmd_submit": true, 00:04:48.026 "transport_retry_count": 4, 00:04:48.026 "bdev_retry_count": 3, 00:04:48.026 "transport_ack_timeout": 0, 00:04:48.026 "ctrlr_loss_timeout_sec": 0, 00:04:48.026 "reconnect_delay_sec": 0, 00:04:48.026 "fast_io_fail_timeout_sec": 0, 00:04:48.026 "disable_auto_failback": false, 00:04:48.026 "generate_uuids": false, 00:04:48.026 "transport_tos": 0, 00:04:48.026 "nvme_error_stat": false, 00:04:48.026 "rdma_srq_size": 0, 00:04:48.026 "io_path_stat": false, 00:04:48.026 "allow_accel_sequence": false, 00:04:48.026 "rdma_max_cq_size": 0, 00:04:48.026 "rdma_cm_event_timeout_ms": 0, 00:04:48.026 "dhchap_digests": [ 00:04:48.026 "sha256", 00:04:48.026 "sha384", 00:04:48.026 "sha512" 00:04:48.026 ], 00:04:48.026 "dhchap_dhgroups": [ 
00:04:48.026 "null", 00:04:48.026 "ffdhe2048", 00:04:48.026 "ffdhe3072", 00:04:48.026 "ffdhe4096", 00:04:48.026 "ffdhe6144", 00:04:48.026 "ffdhe8192" 00:04:48.026 ] 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "bdev_nvme_set_hotplug", 00:04:48.026 "params": { 00:04:48.026 "period_us": 100000, 00:04:48.026 "enable": false 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "bdev_iscsi_set_options", 00:04:48.026 "params": { 00:04:48.026 "timeout_sec": 30 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "bdev_wait_for_examine" 00:04:48.026 } 00:04:48.026 ] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "nvmf", 00:04:48.026 "config": [ 00:04:48.026 { 00:04:48.026 "method": "nvmf_set_config", 00:04:48.026 "params": { 00:04:48.026 "discovery_filter": "match_any", 00:04:48.026 "admin_cmd_passthru": { 00:04:48.026 "identify_ctrlr": false 00:04:48.026 } 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "nvmf_set_max_subsystems", 00:04:48.026 "params": { 00:04:48.026 "max_subsystems": 1024 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "nvmf_set_crdt", 00:04:48.026 "params": { 00:04:48.026 "crdt1": 0, 00:04:48.026 "crdt2": 0, 00:04:48.026 "crdt3": 0 00:04:48.026 } 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "method": "nvmf_create_transport", 00:04:48.026 "params": { 00:04:48.026 "trtype": "TCP", 00:04:48.026 "max_queue_depth": 128, 00:04:48.026 "max_io_qpairs_per_ctrlr": 127, 00:04:48.026 "in_capsule_data_size": 4096, 00:04:48.026 "max_io_size": 131072, 00:04:48.026 "io_unit_size": 131072, 00:04:48.026 "max_aq_depth": 128, 00:04:48.026 "num_shared_buffers": 511, 00:04:48.026 "buf_cache_size": 4294967295, 00:04:48.026 "dif_insert_or_strip": false, 00:04:48.026 "zcopy": false, 00:04:48.026 "c2h_success": true, 00:04:48.026 "sock_priority": 0, 00:04:48.026 "abort_timeout_sec": 1, 00:04:48.026 "ack_timeout": 0, 00:04:48.026 "data_wr_pool_size": 0 00:04:48.026 } 00:04:48.026 } 00:04:48.026 ] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "nbd", 00:04:48.026 "config": [] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "ublk", 00:04:48.026 "config": [] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "vhost_blk", 00:04:48.026 "config": [] 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "scsi", 00:04:48.026 "config": null 00:04:48.026 }, 00:04:48.026 { 00:04:48.026 "subsystem": "iscsi", 00:04:48.026 "config": [ 00:04:48.026 { 00:04:48.027 "method": "iscsi_set_options", 00:04:48.027 "params": { 00:04:48.027 "node_base": "iqn.2016-06.io.spdk", 00:04:48.027 "max_sessions": 128, 00:04:48.027 "max_connections_per_session": 2, 00:04:48.027 "max_queue_depth": 64, 00:04:48.027 "default_time2wait": 2, 00:04:48.027 "default_time2retain": 20, 00:04:48.027 "first_burst_length": 8192, 00:04:48.027 "immediate_data": true, 00:04:48.027 "allow_duplicated_isid": false, 00:04:48.027 "error_recovery_level": 0, 00:04:48.027 "nop_timeout": 60, 00:04:48.027 "nop_in_interval": 30, 00:04:48.027 "disable_chap": false, 00:04:48.027 "require_chap": false, 00:04:48.027 "mutual_chap": false, 00:04:48.027 "chap_group": 0, 00:04:48.027 "max_large_datain_per_connection": 64, 00:04:48.027 "max_r2t_per_connection": 4, 00:04:48.027 "pdu_pool_size": 36864, 00:04:48.027 "immediate_data_pool_size": 16384, 00:04:48.027 "data_out_pool_size": 2048 00:04:48.027 } 00:04:48.027 } 00:04:48.027 ] 00:04:48.027 }, 00:04:48.027 { 00:04:48.027 "subsystem": "vhost_scsi", 00:04:48.027 "config": [] 00:04:48.027 } 
00:04:48.027 ] 00:04:48.027 } 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3246241 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 3246241 ']' 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 3246241 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3246241 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3246241' 00:04:48.027 killing process with pid 3246241 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 3246241 00:04:48.027 05:29:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 3246241 00:04:48.288 05:29:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3246511 00:04:48.288 05:29:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:48.288 05:29:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3246511 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 3246511 ']' 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 3246511 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3246511 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3246511' 00:04:53.600 killing process with pid 3246511 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 3246511 00:04:53.600 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 3246511 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:53.861 00:04:53.861 real 0m6.759s 00:04:53.861 user 0m6.559s 00:04:53.861 sys 0m0.647s 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # 
xtrace_disable 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.861 ************************************ 00:04:53.861 END TEST skip_rpc_with_json 00:04:53.861 ************************************ 00:04:53.861 05:29:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.861 05:29:43 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.861 05:29:43 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.861 05:29:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.861 ************************************ 00:04:53.861 START TEST skip_rpc_with_delay 00:04:53.861 ************************************ 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.861 [2024-05-15 05:29:43.772419] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
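The skip_rpc_with_json case that finished above round-trips the runtime configuration through JSON: the first target gets a TCP transport over RPC, save_config writes config.json, a second target is started with --json and no RPC server, and its output is searched for the transport-init banner. A compressed sketch of that flow (startup waits and cleanup are simplified; it assumes the second target's output is redirected to log.txt, which is what the grep in the log relies on):

  # first target: RPC enabled; create a TCP transport and snapshot the configuration
  ./build/bin/spdk_tgt -m 0x1 &
  sleep 5                                    # stand-in for the test's waitforlisten helper
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  kill %1
  # second target: no RPC server, configuration replayed from JSON, output captured for the check
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt && echo "transport restored from JSON"
  kill %2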
00:04:53.861 [2024-05-15 05:29:43.772554] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:53.861 00:04:53.861 real 0m0.046s 00:04:53.861 user 0m0.021s 00:04:53.861 sys 0m0.025s 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.861 05:29:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.861 ************************************ 00:04:53.861 END TEST skip_rpc_with_delay 00:04:53.861 ************************************ 00:04:53.861 05:29:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.861 05:29:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.861 05:29:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.861 05:29:43 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.861 05:29:43 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.861 05:29:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.861 ************************************ 00:04:53.861 START TEST exit_on_failed_rpc_init 00:04:53.861 ************************************ 00:04:53.861 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:04:53.861 05:29:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3247628 00:04:53.861 05:29:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3247628 00:04:53.861 05:29:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.861 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 3247628 ']' 00:04:53.861 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.121 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:54.121 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.121 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:54.121 05:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.121 [2024-05-15 05:29:43.904161] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:04:54.121 [2024-05-15 05:29:43.904235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247628 ] 00:04:54.121 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.121 [2024-05-15 05:29:43.974600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.122 [2024-05-15 05:29:44.052225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.060 [2024-05-15 05:29:44.750477] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:04:55.060 [2024-05-15 05:29:44.750540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247643 ] 00:04:55.060 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.060 [2024-05-15 05:29:44.819085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.060 [2024-05-15 05:29:44.892474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.060 [2024-05-15 05:29:44.892558] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:04:55.060 [2024-05-15 05:29:44.892571] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.060 [2024-05-15 05:29:44.892579] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3247628 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 3247628 ']' 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 3247628 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:55.060 05:29:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3247628 00:04:55.060 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:55.060 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:55.060 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3247628' 00:04:55.060 killing process with pid 3247628 00:04:55.060 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 3247628 00:04:55.060 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 3247628 00:04:55.320 00:04:55.320 real 0m1.436s 00:04:55.320 user 0m1.604s 00:04:55.320 sys 0m0.436s 00:04:55.320 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:55.320 05:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.320 ************************************ 00:04:55.320 END TEST exit_on_failed_rpc_init 00:04:55.320 ************************************ 00:04:55.579 05:29:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:55.579 00:04:55.579 real 0m14.070s 00:04:55.579 user 0m13.478s 00:04:55.579 sys 0m1.698s 00:04:55.579 05:29:45 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:55.579 05:29:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.579 ************************************ 00:04:55.579 END TEST skip_rpc 00:04:55.579 ************************************ 00:04:55.579 05:29:45 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.579 05:29:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:55.579 05:29:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 
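exit_on_failed_rpc_init, which ended just above, verifies that a second target cannot initialize while the first still owns the default RPC socket: the second spdk_tgt (-m 0x2) fails with "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." and the test insists on a non-zero exit. Reduced to its essentials (a sketch only; the harness uses waitforlisten and the NOT helper rather than the plain shell below):

  ./build/bin/spdk_tgt -m 0x1 &            # first instance claims /var/tmp/spdk.sock
  sleep 5                                  # crude stand-in for waitforlisten
  if ./build/bin/spdk_tgt -m 0x2; then     # second instance must fail while the socket is taken
    echo "unexpected: second target started with the RPC socket already in use" >&2
  fi
  kill %1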
00:04:55.579 05:29:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.579 ************************************ 00:04:55.579 START TEST rpc_client 00:04:55.579 ************************************ 00:04:55.579 05:29:45 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.579 * Looking for test storage... 00:04:55.579 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:55.579 05:29:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:55.579 OK 00:04:55.579 05:29:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.579 00:04:55.579 real 0m0.130s 00:04:55.579 user 0m0.051s 00:04:55.579 sys 0m0.088s 00:04:55.579 05:29:45 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:55.579 05:29:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.579 ************************************ 00:04:55.579 END TEST rpc_client 00:04:55.579 ************************************ 00:04:55.839 05:29:45 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.839 05:29:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:55.839 05:29:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:55.839 05:29:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.839 ************************************ 00:04:55.839 START TEST json_config 00:04:55.839 ************************************ 00:04:55.839 05:29:45 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.839 05:29:45 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:55.839 05:29:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.839 05:29:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.839 05:29:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.839 05:29:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.839 05:29:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.839 05:29:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.839 05:29:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.839 05:29:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@47 -- # : 0 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.839 05:29:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:55.839 WARNING: No tests are enabled so not running JSON configuration tests 00:04:55.839 05:29:45 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:55.839 00:04:55.839 real 0m0.109s 00:04:55.839 user 0m0.053s 00:04:55.839 sys 0m0.057s 00:04:55.839 05:29:45 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:55.839 05:29:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.839 ************************************ 00:04:55.839 END TEST json_config 00:04:55.839 ************************************ 00:04:55.839 05:29:45 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:55.839 05:29:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:55.839 05:29:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:55.839 05:29:45 -- common/autotest_common.sh@10 -- # set +x 00:04:55.839 ************************************ 00:04:55.839 START TEST json_config_extra_key 00:04:55.839 ************************************ 00:04:55.839 05:29:45 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:56.099 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.099 05:29:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:04:56.099 05:29:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.099 05:29:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.099 05:29:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.100 05:29:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.100 05:29:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.100 05:29:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.100 05:29:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:56.100 05:29:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.100 05:29:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:56.100 05:29:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:56.100 INFO: launching applications... 00:04:56.100 05:29:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3248055 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.100 Waiting for target to run... 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3248055 /var/tmp/spdk_tgt.sock 00:04:56.100 05:29:45 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 3248055 ']' 00:04:56.100 05:29:45 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.100 05:29:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:56.100 05:29:45 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:56.100 05:29:45 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.100 05:29:45 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:56.100 05:29:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.100 [2024-05-15 05:29:45.976418] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
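json_config_extra_key, starting above, launches the target non-interactively from a canned JSON file on a private RPC socket: spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json, then waits for that socket before the shutdown phase that follows. A stripped-down version of the launch-and-wait step (polling rpc_get_methods is just one way to wait; the test's waitforlisten helper performs a similar check):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  app_pid=$!
  # block until the private RPC socket answers, then the app counts as running
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done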
00:04:56.100 [2024-05-15 05:29:45.976489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248055 ] 00:04:56.100 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.360 [2024-05-15 05:29:46.257565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.360 [2024-05-15 05:29:46.326463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.928 05:29:46 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:56.928 05:29:46 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:04:56.928 05:29:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.928 00:04:56.928 05:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.928 INFO: shutting down applications... 00:04:56.928 05:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.928 05:29:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.928 05:29:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.928 05:29:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3248055 ]] 00:04:56.928 05:29:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3248055 00:04:56.928 05:29:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.929 05:29:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.929 05:29:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3248055 00:04:56.929 05:29:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.497 05:29:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.497 05:29:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.497 05:29:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3248055 00:04:57.497 05:29:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.498 05:29:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:57.498 05:29:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.498 05:29:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.498 SPDK target shutdown done 00:04:57.498 05:29:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:57.498 Success 00:04:57.498 00:04:57.498 real 0m1.452s 00:04:57.498 user 0m1.186s 00:04:57.498 sys 0m0.402s 00:04:57.498 05:29:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:57.498 05:29:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.498 ************************************ 00:04:57.498 END TEST json_config_extra_key 00:04:57.498 ************************************ 00:04:57.498 05:29:47 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.498 05:29:47 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:57.498 05:29:47 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:57.498 05:29:47 -- common/autotest_common.sh@10 -- # set +x 00:04:57.498 ************************************ 
00:04:57.498 START TEST alias_rpc 00:04:57.498 ************************************ 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.498 * Looking for test storage... 00:04:57.498 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:57.498 05:29:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.498 05:29:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3248369 00:04:57.498 05:29:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3248369 00:04:57.498 05:29:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 3248369 ']' 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:57.498 05:29:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.498 [2024-05-15 05:29:47.511581] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:04:57.498 [2024-05-15 05:29:47.511666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248369 ] 00:04:57.757 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.757 [2024-05-15 05:29:47.580712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.757 [2024-05-15 05:29:47.654828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.326 05:29:48 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:58.326 05:29:48 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:04:58.326 05:29:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:58.585 05:29:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3248369 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 3248369 ']' 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 3248369 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3248369 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3248369' 00:04:58.585 killing process with pid 3248369 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@966 -- # kill 3248369 00:04:58.585 05:29:48 alias_rpc -- common/autotest_common.sh@971 -- # wait 
3248369 00:04:59.154 00:04:59.154 real 0m1.510s 00:04:59.154 user 0m1.615s 00:04:59.154 sys 0m0.445s 00:04:59.154 05:29:48 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:59.154 05:29:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.154 ************************************ 00:04:59.154 END TEST alias_rpc 00:04:59.154 ************************************ 00:04:59.154 05:29:48 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:59.154 05:29:48 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.154 05:29:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:59.154 05:29:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:59.154 05:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:59.154 ************************************ 00:04:59.154 START TEST spdkcli_tcp 00:04:59.154 ************************************ 00:04:59.154 05:29:48 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.154 * Looking for test storage... 00:04:59.154 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3248694 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3248694 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 3248694 ']' 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:59.154 05:29:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.154 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:59.154 [2024-05-15 05:29:49.107821] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:04:59.154 [2024-05-15 05:29:49.107886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248694 ] 00:04:59.154 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.414 [2024-05-15 05:29:49.177753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.414 [2024-05-15 05:29:49.257312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.414 [2024-05-15 05:29:49.257314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.983 05:29:49 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:59.983 05:29:49 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:04:59.983 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3248958 00:04:59.983 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.983 05:29:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.243 [ 00:05:00.243 "spdk_get_version", 00:05:00.243 "rpc_get_methods", 00:05:00.243 "trace_get_info", 00:05:00.243 "trace_get_tpoint_group_mask", 00:05:00.244 "trace_disable_tpoint_group", 00:05:00.244 "trace_enable_tpoint_group", 00:05:00.244 "trace_clear_tpoint_mask", 00:05:00.244 "trace_set_tpoint_mask", 00:05:00.244 "vfu_tgt_set_base_path", 00:05:00.244 "framework_get_pci_devices", 00:05:00.244 "framework_get_config", 00:05:00.244 "framework_get_subsystems", 00:05:00.244 "keyring_get_keys", 00:05:00.244 "iobuf_get_stats", 00:05:00.244 "iobuf_set_options", 00:05:00.244 "sock_get_default_impl", 00:05:00.244 "sock_set_default_impl", 00:05:00.244 "sock_impl_set_options", 00:05:00.244 "sock_impl_get_options", 00:05:00.244 "vmd_rescan", 00:05:00.244 "vmd_remove_device", 00:05:00.244 "vmd_enable", 00:05:00.244 "accel_get_stats", 00:05:00.244 "accel_set_options", 00:05:00.244 "accel_set_driver", 00:05:00.244 "accel_crypto_key_destroy", 00:05:00.244 "accel_crypto_keys_get", 00:05:00.244 "accel_crypto_key_create", 00:05:00.244 "accel_assign_opc", 00:05:00.244 "accel_get_module_info", 00:05:00.244 "accel_get_opc_assignments", 00:05:00.244 "notify_get_notifications", 00:05:00.244 "notify_get_types", 00:05:00.244 "bdev_get_histogram", 00:05:00.244 "bdev_enable_histogram", 00:05:00.244 "bdev_set_qos_limit", 00:05:00.244 "bdev_set_qd_sampling_period", 00:05:00.244 "bdev_get_bdevs", 00:05:00.244 "bdev_reset_iostat", 00:05:00.244 "bdev_get_iostat", 00:05:00.244 "bdev_examine", 00:05:00.244 "bdev_wait_for_examine", 00:05:00.244 "bdev_set_options", 00:05:00.244 "scsi_get_devices", 00:05:00.244 "thread_set_cpumask", 00:05:00.244 "framework_get_scheduler", 00:05:00.244 "framework_set_scheduler", 00:05:00.244 "framework_get_reactors", 00:05:00.244 "thread_get_io_channels", 00:05:00.244 "thread_get_pollers", 00:05:00.244 "thread_get_stats", 00:05:00.244 "framework_monitor_context_switch", 00:05:00.244 "spdk_kill_instance", 00:05:00.244 "log_enable_timestamps", 00:05:00.244 "log_get_flags", 00:05:00.244 "log_clear_flag", 00:05:00.244 "log_set_flag", 00:05:00.244 "log_get_level", 00:05:00.244 "log_set_level", 00:05:00.244 "log_get_print_level", 00:05:00.244 "log_set_print_level", 00:05:00.244 "framework_enable_cpumask_locks", 00:05:00.244 "framework_disable_cpumask_locks", 00:05:00.244 "framework_wait_init", 00:05:00.244 
"framework_start_init", 00:05:00.244 "virtio_blk_create_transport", 00:05:00.244 "virtio_blk_get_transports", 00:05:00.244 "vhost_controller_set_coalescing", 00:05:00.244 "vhost_get_controllers", 00:05:00.244 "vhost_delete_controller", 00:05:00.244 "vhost_create_blk_controller", 00:05:00.244 "vhost_scsi_controller_remove_target", 00:05:00.244 "vhost_scsi_controller_add_target", 00:05:00.244 "vhost_start_scsi_controller", 00:05:00.244 "vhost_create_scsi_controller", 00:05:00.244 "ublk_recover_disk", 00:05:00.244 "ublk_get_disks", 00:05:00.244 "ublk_stop_disk", 00:05:00.244 "ublk_start_disk", 00:05:00.244 "ublk_destroy_target", 00:05:00.244 "ublk_create_target", 00:05:00.244 "nbd_get_disks", 00:05:00.244 "nbd_stop_disk", 00:05:00.244 "nbd_start_disk", 00:05:00.244 "env_dpdk_get_mem_stats", 00:05:00.244 "nvmf_stop_mdns_prr", 00:05:00.244 "nvmf_publish_mdns_prr", 00:05:00.244 "nvmf_subsystem_get_listeners", 00:05:00.244 "nvmf_subsystem_get_qpairs", 00:05:00.244 "nvmf_subsystem_get_controllers", 00:05:00.244 "nvmf_get_stats", 00:05:00.244 "nvmf_get_transports", 00:05:00.244 "nvmf_create_transport", 00:05:00.244 "nvmf_get_targets", 00:05:00.244 "nvmf_delete_target", 00:05:00.244 "nvmf_create_target", 00:05:00.244 "nvmf_subsystem_allow_any_host", 00:05:00.244 "nvmf_subsystem_remove_host", 00:05:00.244 "nvmf_subsystem_add_host", 00:05:00.244 "nvmf_ns_remove_host", 00:05:00.244 "nvmf_ns_add_host", 00:05:00.244 "nvmf_subsystem_remove_ns", 00:05:00.244 "nvmf_subsystem_add_ns", 00:05:00.244 "nvmf_subsystem_listener_set_ana_state", 00:05:00.244 "nvmf_discovery_get_referrals", 00:05:00.244 "nvmf_discovery_remove_referral", 00:05:00.244 "nvmf_discovery_add_referral", 00:05:00.244 "nvmf_subsystem_remove_listener", 00:05:00.244 "nvmf_subsystem_add_listener", 00:05:00.244 "nvmf_delete_subsystem", 00:05:00.244 "nvmf_create_subsystem", 00:05:00.244 "nvmf_get_subsystems", 00:05:00.244 "nvmf_set_crdt", 00:05:00.244 "nvmf_set_config", 00:05:00.244 "nvmf_set_max_subsystems", 00:05:00.244 "iscsi_get_histogram", 00:05:00.244 "iscsi_enable_histogram", 00:05:00.244 "iscsi_set_options", 00:05:00.244 "iscsi_get_auth_groups", 00:05:00.244 "iscsi_auth_group_remove_secret", 00:05:00.244 "iscsi_auth_group_add_secret", 00:05:00.244 "iscsi_delete_auth_group", 00:05:00.244 "iscsi_create_auth_group", 00:05:00.244 "iscsi_set_discovery_auth", 00:05:00.244 "iscsi_get_options", 00:05:00.244 "iscsi_target_node_request_logout", 00:05:00.244 "iscsi_target_node_set_redirect", 00:05:00.244 "iscsi_target_node_set_auth", 00:05:00.244 "iscsi_target_node_add_lun", 00:05:00.244 "iscsi_get_stats", 00:05:00.244 "iscsi_get_connections", 00:05:00.244 "iscsi_portal_group_set_auth", 00:05:00.244 "iscsi_start_portal_group", 00:05:00.244 "iscsi_delete_portal_group", 00:05:00.244 "iscsi_create_portal_group", 00:05:00.244 "iscsi_get_portal_groups", 00:05:00.244 "iscsi_delete_target_node", 00:05:00.244 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.244 "iscsi_target_node_add_pg_ig_maps", 00:05:00.244 "iscsi_create_target_node", 00:05:00.244 "iscsi_get_target_nodes", 00:05:00.244 "iscsi_delete_initiator_group", 00:05:00.244 "iscsi_initiator_group_remove_initiators", 00:05:00.244 "iscsi_initiator_group_add_initiators", 00:05:00.244 "iscsi_create_initiator_group", 00:05:00.244 "iscsi_get_initiator_groups", 00:05:00.244 "keyring_file_remove_key", 00:05:00.244 "keyring_file_add_key", 00:05:00.244 "vfu_virtio_create_scsi_endpoint", 00:05:00.244 "vfu_virtio_scsi_remove_target", 00:05:00.244 "vfu_virtio_scsi_add_target", 00:05:00.244 
"vfu_virtio_create_blk_endpoint", 00:05:00.244 "vfu_virtio_delete_endpoint", 00:05:00.244 "iaa_scan_accel_module", 00:05:00.244 "dsa_scan_accel_module", 00:05:00.244 "ioat_scan_accel_module", 00:05:00.244 "accel_error_inject_error", 00:05:00.244 "bdev_iscsi_delete", 00:05:00.244 "bdev_iscsi_create", 00:05:00.244 "bdev_iscsi_set_options", 00:05:00.244 "bdev_virtio_attach_controller", 00:05:00.244 "bdev_virtio_scsi_get_devices", 00:05:00.244 "bdev_virtio_detach_controller", 00:05:00.244 "bdev_virtio_blk_set_hotplug", 00:05:00.244 "bdev_ftl_set_property", 00:05:00.244 "bdev_ftl_get_properties", 00:05:00.244 "bdev_ftl_get_stats", 00:05:00.244 "bdev_ftl_unmap", 00:05:00.244 "bdev_ftl_unload", 00:05:00.244 "bdev_ftl_delete", 00:05:00.244 "bdev_ftl_load", 00:05:00.244 "bdev_ftl_create", 00:05:00.244 "bdev_aio_delete", 00:05:00.244 "bdev_aio_rescan", 00:05:00.244 "bdev_aio_create", 00:05:00.244 "blobfs_create", 00:05:00.244 "blobfs_detect", 00:05:00.244 "blobfs_set_cache_size", 00:05:00.244 "bdev_zone_block_delete", 00:05:00.244 "bdev_zone_block_create", 00:05:00.244 "bdev_delay_delete", 00:05:00.244 "bdev_delay_create", 00:05:00.244 "bdev_delay_update_latency", 00:05:00.244 "bdev_split_delete", 00:05:00.244 "bdev_split_create", 00:05:00.244 "bdev_error_inject_error", 00:05:00.244 "bdev_error_delete", 00:05:00.244 "bdev_error_create", 00:05:00.244 "bdev_raid_set_options", 00:05:00.244 "bdev_raid_remove_base_bdev", 00:05:00.244 "bdev_raid_add_base_bdev", 00:05:00.244 "bdev_raid_delete", 00:05:00.244 "bdev_raid_create", 00:05:00.244 "bdev_raid_get_bdevs", 00:05:00.244 "bdev_lvol_check_shallow_copy", 00:05:00.244 "bdev_lvol_start_shallow_copy", 00:05:00.244 "bdev_lvol_grow_lvstore", 00:05:00.244 "bdev_lvol_get_lvols", 00:05:00.244 "bdev_lvol_get_lvstores", 00:05:00.244 "bdev_lvol_delete", 00:05:00.244 "bdev_lvol_set_read_only", 00:05:00.244 "bdev_lvol_resize", 00:05:00.244 "bdev_lvol_decouple_parent", 00:05:00.244 "bdev_lvol_inflate", 00:05:00.244 "bdev_lvol_rename", 00:05:00.244 "bdev_lvol_clone_bdev", 00:05:00.244 "bdev_lvol_clone", 00:05:00.244 "bdev_lvol_snapshot", 00:05:00.244 "bdev_lvol_create", 00:05:00.244 "bdev_lvol_delete_lvstore", 00:05:00.244 "bdev_lvol_rename_lvstore", 00:05:00.244 "bdev_lvol_create_lvstore", 00:05:00.244 "bdev_passthru_delete", 00:05:00.244 "bdev_passthru_create", 00:05:00.244 "bdev_nvme_cuse_unregister", 00:05:00.244 "bdev_nvme_cuse_register", 00:05:00.244 "bdev_opal_new_user", 00:05:00.244 "bdev_opal_set_lock_state", 00:05:00.244 "bdev_opal_delete", 00:05:00.244 "bdev_opal_get_info", 00:05:00.244 "bdev_opal_create", 00:05:00.244 "bdev_nvme_opal_revert", 00:05:00.244 "bdev_nvme_opal_init", 00:05:00.244 "bdev_nvme_send_cmd", 00:05:00.244 "bdev_nvme_get_path_iostat", 00:05:00.244 "bdev_nvme_get_mdns_discovery_info", 00:05:00.244 "bdev_nvme_stop_mdns_discovery", 00:05:00.244 "bdev_nvme_start_mdns_discovery", 00:05:00.244 "bdev_nvme_set_multipath_policy", 00:05:00.244 "bdev_nvme_set_preferred_path", 00:05:00.244 "bdev_nvme_get_io_paths", 00:05:00.244 "bdev_nvme_remove_error_injection", 00:05:00.244 "bdev_nvme_add_error_injection", 00:05:00.244 "bdev_nvme_get_discovery_info", 00:05:00.244 "bdev_nvme_stop_discovery", 00:05:00.244 "bdev_nvme_start_discovery", 00:05:00.244 "bdev_nvme_get_controller_health_info", 00:05:00.244 "bdev_nvme_disable_controller", 00:05:00.244 "bdev_nvme_enable_controller", 00:05:00.244 "bdev_nvme_reset_controller", 00:05:00.244 "bdev_nvme_get_transport_statistics", 00:05:00.244 "bdev_nvme_apply_firmware", 00:05:00.244 "bdev_nvme_detach_controller", 
00:05:00.244 "bdev_nvme_get_controllers", 00:05:00.244 "bdev_nvme_attach_controller", 00:05:00.244 "bdev_nvme_set_hotplug", 00:05:00.244 "bdev_nvme_set_options", 00:05:00.244 "bdev_null_resize", 00:05:00.244 "bdev_null_delete", 00:05:00.244 "bdev_null_create", 00:05:00.244 "bdev_malloc_delete", 00:05:00.244 "bdev_malloc_create" 00:05:00.244 ] 00:05:00.244 05:29:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.244 05:29:50 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.245 05:29:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.245 05:29:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3248694 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 3248694 ']' 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 3248694 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3248694 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3248694' 00:05:00.245 killing process with pid 3248694 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 3248694 00:05:00.245 05:29:50 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 3248694 00:05:00.505 00:05:00.505 real 0m1.507s 00:05:00.505 user 0m2.769s 00:05:00.505 sys 0m0.461s 00:05:00.505 05:29:50 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:00.505 05:29:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.505 ************************************ 00:05:00.505 END TEST spdkcli_tcp 00:05:00.505 ************************************ 00:05:00.505 05:29:50 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.505 05:29:50 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.505 05:29:50 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.505 05:29:50 -- common/autotest_common.sh@10 -- # set +x 00:05:00.763 ************************************ 00:05:00.763 START TEST dpdk_mem_utility 00:05:00.763 ************************************ 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.763 * Looking for test storage... 
00:05:00.763 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:00.763 05:29:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.763 05:29:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3249031 00:05:00.763 05:29:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3249031 00:05:00.763 05:29:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 3249031 ']' 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.763 05:29:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.763 [2024-05-15 05:29:50.701334] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:00.763 [2024-05-15 05:29:50.701425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249031 ] 00:05:00.763 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.763 [2024-05-15 05:29:50.772646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.022 [2024-05-15 05:29:50.853522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.591 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:01.591 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:05:01.591 05:29:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.591 05:29:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.591 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.591 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.591 { 00:05:01.591 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.591 } 00:05:01.591 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.591 05:29:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:01.591 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:01.591 1 heaps totaling size 814.000000 MiB 00:05:01.591 size: 814.000000 MiB heap id: 0 00:05:01.591 end heaps---------- 00:05:01.591 8 mempools totaling size 598.116089 MiB 00:05:01.591 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.591 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.591 size: 84.521057 MiB name: bdev_io_3249031 00:05:01.591 size: 51.011292 MiB name: evtpool_3249031 00:05:01.591 size: 50.003479 MiB 
name: msgpool_3249031 00:05:01.591 size: 21.763794 MiB name: PDU_Pool 00:05:01.591 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.591 size: 0.026123 MiB name: Session_Pool 00:05:01.591 end mempools------- 00:05:01.591 6 memzones totaling size 4.142822 MiB 00:05:01.591 size: 1.000366 MiB name: RG_ring_0_3249031 00:05:01.591 size: 1.000366 MiB name: RG_ring_1_3249031 00:05:01.591 size: 1.000366 MiB name: RG_ring_4_3249031 00:05:01.591 size: 1.000366 MiB name: RG_ring_5_3249031 00:05:01.591 size: 0.125366 MiB name: RG_ring_2_3249031 00:05:01.591 size: 0.015991 MiB name: RG_ring_3_3249031 00:05:01.591 end memzones------- 00:05:01.591 05:29:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.851 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:01.851 list of free elements. size: 12.519348 MiB 00:05:01.851 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:01.851 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:01.851 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:01.851 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:01.851 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:01.851 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:01.851 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:01.851 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:01.851 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:01.851 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:01.851 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:01.851 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:01.851 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:01.851 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:01.851 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:01.851 list of standard malloc elements. 
size: 199.218079 MiB 00:05:01.851 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:01.851 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:01.851 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:01.851 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:01.851 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:01.851 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:01.851 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:01.852 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:01.852 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:01.852 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:01.852 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:01.852 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:01.852 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:01.852 list of memzone associated elements. 
size: 602.262573 MiB 00:05:01.852 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:01.852 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.852 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:01.852 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.852 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:01.852 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3249031_0 00:05:01.852 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:01.852 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3249031_0 00:05:01.852 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:01.852 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3249031_0 00:05:01.852 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:01.852 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.852 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:01.852 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.852 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:01.852 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3249031 00:05:01.852 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:01.852 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3249031 00:05:01.852 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:01.852 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3249031 00:05:01.852 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:01.852 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.852 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:01.852 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.852 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:01.852 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.852 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:01.852 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.852 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:01.852 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3249031 00:05:01.852 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:01.852 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3249031 00:05:01.852 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:01.852 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3249031 00:05:01.852 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:01.852 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3249031 00:05:01.852 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:01.852 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3249031 00:05:01.852 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:01.852 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.852 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:01.852 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.852 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:01.852 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.852 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:01.852 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3249031 00:05:01.852 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:01.852 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.852 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:01.852 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.852 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:01.852 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3249031 00:05:01.852 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:01.852 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.852 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:01.852 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3249031 00:05:01.852 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:01.852 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3249031 00:05:01.852 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:01.852 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.852 05:29:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.852 05:29:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3249031 00:05:01.852 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 3249031 ']' 00:05:01.852 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 3249031 00:05:01.852 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:05:01.852 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:01.852 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3249031 00:05:01.853 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:01.853 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:01.853 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3249031' 00:05:01.853 killing process with pid 3249031 00:05:01.853 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 3249031 00:05:01.853 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 3249031 00:05:02.112 00:05:02.112 real 0m1.404s 00:05:02.112 user 0m1.433s 00:05:02.112 sys 0m0.443s 00:05:02.112 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:02.112 05:29:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.112 ************************************ 00:05:02.112 END TEST dpdk_mem_utility 00:05:02.112 ************************************ 00:05:02.112 05:29:52 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:02.112 05:29:52 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:02.112 05:29:52 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:02.112 05:29:52 -- common/autotest_common.sh@10 -- # set +x 00:05:02.112 ************************************ 00:05:02.112 START TEST event 00:05:02.112 ************************************ 00:05:02.112 05:29:52 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:02.371 * Looking for test storage... 
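To recap the dpdk_mem_utility steps traced above: the target is asked to dump its DPDK memory statistics and dpdk_mem_info.py then renders the dump. A condensed sketch of the calls shown in the trace (the dump file name is taken from the env_dpdk_get_mem_stats reply):

  scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # heap/mempool/memzone summary above
  scripts/dpdk_mem_info.py -m 0           # the element-by-element heap listing above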
00:05:02.371 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:02.371 05:29:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:02.371 05:29:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:02.371 05:29:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.371 05:29:52 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:02.371 05:29:52 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:02.372 05:29:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.372 ************************************ 00:05:02.372 START TEST event_perf 00:05:02.372 ************************************ 00:05:02.372 05:29:52 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:02.372 Running I/O for 1 seconds...[2024-05-15 05:29:52.219398] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:02.372 [2024-05-15 05:29:52.219478] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249355 ] 00:05:02.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.372 [2024-05-15 05:29:52.292260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.372 [2024-05-15 05:29:52.367097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.372 [2024-05-15 05:29:52.367194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.372 [2024-05-15 05:29:52.367277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.372 [2024-05-15 05:29:52.367279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.773 Running I/O for 1 seconds... 00:05:03.773 lcore 0: 200054 00:05:03.773 lcore 1: 200056 00:05:03.773 lcore 2: 200053 00:05:03.773 lcore 3: 200055 00:05:03.773 done. 00:05:03.773 00:05:03.773 real 0m1.232s 00:05:03.773 user 0m4.134s 00:05:03.773 sys 0m0.094s 00:05:03.773 05:29:53 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:03.773 05:29:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.773 ************************************ 00:05:03.773 END TEST event_perf 00:05:03.773 ************************************ 00:05:03.773 05:29:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.773 05:29:53 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:03.773 05:29:53 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:03.773 05:29:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.773 ************************************ 00:05:03.773 START TEST event_reactor 00:05:03.773 ************************************ 00:05:03.773 05:29:53 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:03.773 [2024-05-15 05:29:53.537298] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:03.773 [2024-05-15 05:29:53.537387] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249642 ] 00:05:03.773 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.773 [2024-05-15 05:29:53.609504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.773 [2024-05-15 05:29:53.679817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.151 test_start 00:05:05.151 oneshot 00:05:05.151 tick 100 00:05:05.151 tick 100 00:05:05.152 tick 250 00:05:05.152 tick 100 00:05:05.152 tick 100 00:05:05.152 tick 100 00:05:05.152 tick 250 00:05:05.152 tick 500 00:05:05.152 tick 100 00:05:05.152 tick 100 00:05:05.152 tick 250 00:05:05.152 tick 100 00:05:05.152 tick 100 00:05:05.152 test_end 00:05:05.152 00:05:05.152 real 0m1.224s 00:05:05.152 user 0m1.135s 00:05:05.152 sys 0m0.084s 00:05:05.152 05:29:54 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:05.152 05:29:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:05.152 ************************************ 00:05:05.152 END TEST event_reactor 00:05:05.152 ************************************ 00:05:05.152 05:29:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.152 05:29:54 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:05.152 05:29:54 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:05.152 05:29:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.152 ************************************ 00:05:05.152 START TEST event_reactor_perf 00:05:05.152 ************************************ 00:05:05.152 05:29:54 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.152 [2024-05-15 05:29:54.851449] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:05.152 [2024-05-15 05:29:54.851529] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249927 ] 00:05:05.152 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.152 [2024-05-15 05:29:54.924001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.152 [2024-05-15 05:29:54.994737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.089 test_start 00:05:06.089 test_end 00:05:06.089 Performance: 968955 events per second 00:05:06.089 00:05:06.089 real 0m1.226s 00:05:06.089 user 0m1.129s 00:05:06.089 sys 0m0.093s 00:05:06.089 05:29:56 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:06.089 05:29:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.089 ************************************ 00:05:06.089 END TEST event_reactor_perf 00:05:06.089 ************************************ 00:05:06.089 05:29:56 event -- event/event.sh@49 -- # uname -s 00:05:06.089 05:29:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.089 05:29:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.089 05:29:56 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:06.089 05:29:56 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:06.089 05:29:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.348 ************************************ 00:05:06.348 START TEST event_scheduler 00:05:06.348 ************************************ 00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:06.348 * Looking for test storage... 00:05:06.348 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:06.348 05:29:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.348 05:29:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3250244 00:05:06.348 05:29:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.348 05:29:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.348 05:29:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3250244 00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 3250244 ']' 00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:06.348 05:29:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.348 [2024-05-15 05:29:56.282868] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:06.348 [2024-05-15 05:29:56.282955] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250244 ] 00:05:06.348 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.348 [2024-05-15 05:29:56.351556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.606 [2024-05-15 05:29:56.428460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.606 [2024-05-15 05:29:56.428544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.606 [2024-05-15 05:29:56.428628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.606 [2024-05-15 05:29:56.428630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.174 05:29:57 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:07.174 05:29:57 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:05:07.174 05:29:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.174 05:29:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.174 05:29:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.174 POWER: Env isn't set yet! 00:05:07.174 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:07.174 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.174 POWER: Attempting to initialise PSTAT power management... 00:05:07.174 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:07.174 POWER: Initialized successfully for lcore 0 power management 00:05:07.174 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:07.174 POWER: Initialized successfully for lcore 1 power management 00:05:07.433 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:07.433 POWER: Initialized successfully for lcore 2 power management 00:05:07.433 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:07.433 POWER: Initialized successfully for lcore 3 power management 00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.433 05:29:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.433 [2024-05-15 05:29:57.283141] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.433 05:29:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:07.433 05:29:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.433 ************************************ 00:05:07.433 START TEST scheduler_create_thread 00:05:07.433 ************************************ 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.433 2 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.433 3 00:05:07.433 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 4 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 5 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 6 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 7 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 8 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 9 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 10 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.434 05:29:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.810 05:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:08.810 05:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:08.810 05:29:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:08.810 05:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:08.810 05:29:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.745 05:29:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:09.745 05:29:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.745 05:29:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:09.745 05:29:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.681 05:30:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:10.681 05:30:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:10.681 05:30:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:10.681 05:30:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:10.681 05:30:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.247 05:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:11.247 00:05:11.247 real 0m3.892s 00:05:11.247 user 0m0.026s 00:05:11.247 sys 0m0.005s 00:05:11.247 05:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:11.247 05:30:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.247 ************************************ 00:05:11.247 END TEST scheduler_create_thread 00:05:11.247 ************************************ 00:05:11.247 05:30:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:11.247 05:30:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3250244 00:05:11.247 05:30:01 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 3250244 ']' 00:05:11.247 05:30:01 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 3250244 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3250244 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3250244' 00:05:11.507 killing process with pid 3250244 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 3250244 00:05:11.507 05:30:01 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 3250244 00:05:11.845 [2024-05-15 05:30:01.602797] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
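For reference, the scheduler_create_thread subtest above drives the scheduler test application through its RPC plugin; rpc_cmd is the autotest helper that forwards these calls to scripts/rpc.py. A condensed sketch of the calls visible in the trace, with the thread ids reported above:

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0    # returns thread_id 11
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100      # returns thread_id 12
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12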
00:05:11.845 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:11.845 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:11.845 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:11.845 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:11.845 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:11.845 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:11.845 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:11.845 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:12.104 00:05:12.104 real 0m5.711s 00:05:12.104 user 0m12.942s 00:05:12.104 sys 0m0.415s 00:05:12.104 05:30:01 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:12.104 05:30:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.104 ************************************ 00:05:12.104 END TEST event_scheduler 00:05:12.104 ************************************ 00:05:12.104 05:30:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.104 05:30:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.104 05:30:01 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:12.104 05:30:01 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:12.104 05:30:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.104 ************************************ 00:05:12.104 START TEST app_repeat 00:05:12.104 ************************************ 00:05:12.104 05:30:01 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3251415 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3251415' 00:05:12.104 Process app_repeat pid: 3251415 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.104 spdk_app_start Round 0 00:05:12.104 05:30:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3251415 /var/tmp/spdk-nbd.sock 00:05:12.105 05:30:01 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3251415 ']' 00:05:12.105 05:30:01 event.app_repeat -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.105 05:30:01 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:12.105 05:30:01 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.105 05:30:01 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:12.105 05:30:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.105 [2024-05-15 05:30:01.995501] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:12.105 [2024-05-15 05:30:01.995583] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251415 ] 00:05:12.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.105 [2024-05-15 05:30:02.067957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.461 [2024-05-15 05:30:02.146887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.461 [2024-05-15 05:30:02.146890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.026 05:30:02 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:13.026 05:30:02 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:13.026 05:30:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.026 Malloc0 00:05:13.026 05:30:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.294 Malloc1 00:05:13.294 05:30:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.294 05:30:03 event.app_repeat -- bdev/nbd_common.sh@15 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.554 /dev/nbd0 00:05:13.554 05:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.554 05:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.554 1+0 records in 00:05:13.554 1+0 records out 00:05:13.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259365 s, 15.8 MB/s 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:13.554 05:30:03 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:13.554 05:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.554 05:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.554 05:30:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.813 /dev/nbd1 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.813 1+0 records in 00:05:13.813 1+0 records out 00:05:13.813 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000230852 s, 17.7 MB/s 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:13.813 05:30:03 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.813 { 00:05:13.813 "nbd_device": "/dev/nbd0", 00:05:13.813 "bdev_name": "Malloc0" 00:05:13.813 }, 00:05:13.813 { 00:05:13.813 "nbd_device": "/dev/nbd1", 00:05:13.813 "bdev_name": "Malloc1" 00:05:13.813 } 00:05:13.813 ]' 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.813 { 00:05:13.813 "nbd_device": "/dev/nbd0", 00:05:13.813 "bdev_name": "Malloc0" 00:05:13.813 }, 00:05:13.813 { 00:05:13.813 "nbd_device": "/dev/nbd1", 00:05:13.813 "bdev_name": "Malloc1" 00:05:13.813 } 00:05:13.813 ]' 00:05:13.813 05:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.072 /dev/nbd1' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.072 /dev/nbd1' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.072 256+0 records in 00:05:14.072 256+0 records out 00:05:14.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113614 s, 92.3 MB/s 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i 
in "${nbd_list[@]}" 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.072 256+0 records in 00:05:14.072 256+0 records out 00:05:14.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203329 s, 51.6 MB/s 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.072 256+0 records in 00:05:14.072 256+0 records out 00:05:14.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216585 s, 48.4 MB/s 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.072 05:30:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.332 05:30:04 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.332 05:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.591 05:30:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.591 05:30:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.850 05:30:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.110 [2024-05-15 05:30:04.930956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.110 [2024-05-15 05:30:04.998067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.110 [2024-05-15 05:30:04.998070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.110 [2024-05-15 05:30:05.038778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.110 [2024-05-15 05:30:05.038825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
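[editor's note] Round 0 above exercises nbd_dd_data_verify: a 1 MiB random file is generated, written through each exported /dev/nbdX, then read back and compared. A condensed sketch of that write/verify pattern as traced, with the workspace path shortened to $testdir (a placeholder, not the exact path from this run):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$testdir/nbdrandtest

    # write phase: 256 x 4 KiB of random data, pushed to each nbd device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-for-byte comparison of the first 1 MiB of each device
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"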
00:05:18.402 05:30:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.402 05:30:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:18.402 spdk_app_start Round 1 00:05:18.402 05:30:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3251415 /var/tmp/spdk-nbd.sock 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3251415 ']' 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:18.402 05:30:07 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:18.402 05:30:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.402 Malloc0 00:05:18.402 05:30:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.402 Malloc1 00:05:18.402 05:30:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.402 05:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.662 /dev/nbd0 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.662 1+0 records in 00:05:18.662 1+0 records out 00:05:18.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214172 s, 19.1 MB/s 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.662 /dev/nbd1 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.662 05:30:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:18.662 05:30:08 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.921 1+0 records in 00:05:18.921 1+0 records out 00:05:18.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285024 s, 14.4 MB/s 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:18.921 05:30:08 
event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:18.921 05:30:08 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.921 { 00:05:18.921 "nbd_device": "/dev/nbd0", 00:05:18.921 "bdev_name": "Malloc0" 00:05:18.921 }, 00:05:18.921 { 00:05:18.921 "nbd_device": "/dev/nbd1", 00:05:18.921 "bdev_name": "Malloc1" 00:05:18.921 } 00:05:18.921 ]' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.921 { 00:05:18.921 "nbd_device": "/dev/nbd0", 00:05:18.921 "bdev_name": "Malloc0" 00:05:18.921 }, 00:05:18.921 { 00:05:18.921 "nbd_device": "/dev/nbd1", 00:05:18.921 "bdev_name": "Malloc1" 00:05:18.921 } 00:05:18.921 ]' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.921 /dev/nbd1' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.921 /dev/nbd1' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.921 05:30:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.180 256+0 records in 00:05:19.181 256+0 records out 00:05:19.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109396 s, 95.9 MB/s 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.181 256+0 records in 00:05:19.181 256+0 records out 00:05:19.181 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0202704 s, 51.7 MB/s 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.181 256+0 records in 00:05:19.181 256+0 records out 00:05:19.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220155 s, 47.6 MB/s 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.181 05:30:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.181 05:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.440 05:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.700 05:30:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.700 05:30:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.959 05:30:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.218 [2024-05-15 05:30:10.005041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.218 [2024-05-15 05:30:10.091263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.218 [2024-05-15 05:30:10.091266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.218 [2024-05-15 05:30:10.133437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.218 [2024-05-15 05:30:10.133483] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
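[editor's note] Each nbd_start_disk call above is followed by waitfornbd, whose probes are visible in the xtrace (grep of /proc/partitions, a 1-block O_DIRECT read, a size check). A rough reconstruction based only on those traced commands, not the verbatim helper; the retry delay and the /tmp path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        # wait (up to ~20 tries) for the kernel to publish the partition entry
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed delay; the traced run hit on the first probe
        done
        # read one block through the device to confirm I/O actually works
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                    # a non-empty read means the device is usable
    }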
00:05:23.505 05:30:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.505 05:30:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:23.505 spdk_app_start Round 2 00:05:23.505 05:30:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3251415 /var/tmp/spdk-nbd.sock 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3251415 ']' 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:23.505 05:30:12 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:23.505 05:30:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.505 Malloc0 00:05:23.505 05:30:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.505 Malloc1 00:05:23.505 05:30:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.505 05:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.506 05:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.506 05:30:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.506 /dev/nbd0 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.765 1+0 records in 00:05:23.765 1+0 records out 00:05:23.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022394 s, 18.3 MB/s 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.765 /dev/nbd1 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.765 1+0 records in 00:05:23.765 1+0 records out 00:05:23.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162165 s, 25.3 MB/s 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:23.765 05:30:13 
event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:23.765 05:30:13 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.765 05:30:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.024 05:30:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.025 { 00:05:24.025 "nbd_device": "/dev/nbd0", 00:05:24.025 "bdev_name": "Malloc0" 00:05:24.025 }, 00:05:24.025 { 00:05:24.025 "nbd_device": "/dev/nbd1", 00:05:24.025 "bdev_name": "Malloc1" 00:05:24.025 } 00:05:24.025 ]' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.025 { 00:05:24.025 "nbd_device": "/dev/nbd0", 00:05:24.025 "bdev_name": "Malloc0" 00:05:24.025 }, 00:05:24.025 { 00:05:24.025 "nbd_device": "/dev/nbd1", 00:05:24.025 "bdev_name": "Malloc1" 00:05:24.025 } 00:05:24.025 ]' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.025 /dev/nbd1' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.025 /dev/nbd1' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.025 256+0 records in 00:05:24.025 256+0 records out 00:05:24.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109703 s, 95.6 MB/s 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.025 05:30:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.025 256+0 records in 00:05:24.025 256+0 records out 00:05:24.025 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0205246 s, 51.1 MB/s 00:05:24.025 05:30:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.025 05:30:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.284 256+0 records in 00:05:24.284 256+0 records out 00:05:24.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215942 s, 48.6 MB/s 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.284 05:30:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.285 05:30:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.544 05:30:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.803 05:30:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.803 05:30:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.062 05:30:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.062 [2024-05-15 05:30:15.056564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.322 [2024-05-15 05:30:15.124183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.322 [2024-05-15 05:30:15.124187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.322 [2024-05-15 05:30:15.166008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.322 [2024-05-15 05:30:15.166052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
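[editor's note] Rounds 0 through 2 above all follow the same shape: wait for the app_repeat RPC socket, create two malloc bdevs, run the nbd data verify, then send spdk_kill_instance and sleep so the app can start the next round itself. A condensed sketch of that driver loop; the helper names and the 64/4096 malloc parameters come from the trace, everything else is simplified:

    rpc_server=/var/tmp/spdk-nbd.sock
    rpc="scripts/rpc.py -s $rpc_server"

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        $rpc bdev_malloc_create 64 4096             # Malloc0
        $rpc bdev_malloc_create 64 4096             # Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM             # app_repeat re-enters spdk_app_start for the next round
        sleep 3
    done
    killprocess "$repeat_pid"                       # Round 3: final shutdown, as logged below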
00:05:27.856 05:30:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3251415 /var/tmp/spdk-nbd.sock 00:05:27.856 05:30:17 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3251415 ']' 00:05:27.856 05:30:17 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.857 05:30:17 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:27.857 05:30:17 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.857 05:30:17 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:27.857 05:30:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:28.116 05:30:18 event.app_repeat -- event/event.sh@39 -- # killprocess 3251415 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 3251415 ']' 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 3251415 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3251415 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3251415' 00:05:28.116 killing process with pid 3251415 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@966 -- # kill 3251415 00:05:28.116 05:30:18 event.app_repeat -- common/autotest_common.sh@971 -- # wait 3251415 00:05:28.376 spdk_app_start is called in Round 0. 00:05:28.376 Shutdown signal received, stop current app iteration 00:05:28.376 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:05:28.376 spdk_app_start is called in Round 1. 00:05:28.376 Shutdown signal received, stop current app iteration 00:05:28.376 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:05:28.376 spdk_app_start is called in Round 2. 00:05:28.376 Shutdown signal received, stop current app iteration 00:05:28.376 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:05:28.376 spdk_app_start is called in Round 3. 
00:05:28.376 Shutdown signal received, stop current app iteration 00:05:28.376 05:30:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.376 05:30:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.376 00:05:28.376 real 0m16.294s 00:05:28.376 user 0m34.453s 00:05:28.376 sys 0m3.160s 00:05:28.376 05:30:18 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:28.376 05:30:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.376 ************************************ 00:05:28.376 END TEST app_repeat 00:05:28.376 ************************************ 00:05:28.376 05:30:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.376 05:30:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:28.376 05:30:18 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:28.376 05:30:18 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.376 05:30:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.376 ************************************ 00:05:28.376 START TEST cpu_locks 00:05:28.376 ************************************ 00:05:28.376 05:30:18 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:28.636 * Looking for test storage... 00:05:28.636 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:28.636 05:30:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.636 05:30:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.636 05:30:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.636 05:30:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.636 05:30:18 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:28.636 05:30:18 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.636 05:30:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.636 ************************************ 00:05:28.636 START TEST default_locks 00:05:28.636 ************************************ 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3254824 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3254824 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 3254824 ']' 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:28.636 05:30:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.636 [2024-05-15 05:30:18.532776] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:28.636 [2024-05-15 05:30:18.532852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254824 ] 00:05:28.636 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.636 [2024-05-15 05:30:18.603915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.895 [2024-05-15 05:30:18.683993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.464 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:29.464 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:29.464 05:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3254824 00:05:29.464 05:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3254824 00:05:29.464 05:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.722 lslocks: write error 00:05:29.722 05:30:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3254824 00:05:29.723 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 3254824 ']' 00:05:29.723 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 3254824 00:05:29.723 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:29.723 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:29.723 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3254824 00:05:29.982 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:29.982 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:29.982 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3254824' 00:05:29.982 killing process with pid 3254824 00:05:29.982 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 3254824 00:05:29.982 05:30:19 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 3254824 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3254824 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3254824 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 3254824 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 3254824 ']' 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.242 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (3254824) - No such process 00:05:30.242 ERROR: process (pid: 3254824) is no longer running 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.242 00:05:30.242 real 0m1.597s 00:05:30.242 user 0m1.664s 00:05:30.242 sys 0m0.555s 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:30.242 05:30:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.242 ************************************ 00:05:30.242 END TEST default_locks 00:05:30.242 ************************************ 00:05:30.242 05:30:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.242 05:30:20 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:30.242 05:30:20 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:30.242 05:30:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.242 ************************************ 00:05:30.242 START TEST default_locks_via_rpc 00:05:30.242 ************************************ 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3255142 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3255142 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3255142 ']' 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:30.242 05:30:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.243 05:30:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.243 [2024-05-15 05:30:20.211872] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:30.243 [2024-05-15 05:30:20.211953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255142 ] 00:05:30.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.502 [2024-05-15 05:30:20.282776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.502 [2024-05-15 05:30:20.361668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3255142 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3255142 00:05:31.069 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3255142 00:05:31.638 05:30:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 3255142 ']' 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 3255142 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3255142 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3255142' 00:05:31.638 killing process with pid 3255142 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 3255142 00:05:31.638 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 3255142 00:05:31.897 00:05:31.897 real 0m1.600s 00:05:31.897 user 0m1.686s 00:05:31.897 sys 0m0.545s 00:05:31.897 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:31.897 05:30:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.897 ************************************ 00:05:31.897 END TEST default_locks_via_rpc 00:05:31.897 ************************************ 00:05:31.897 05:30:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.897 05:30:21 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:31.897 05:30:21 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:31.897 05:30:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.897 ************************************ 00:05:31.897 START TEST non_locking_app_on_locked_coremask 00:05:31.897 ************************************ 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3255519 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3255519 /var/tmp/spdk.sock 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3255519 ']' 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.897 05:30:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.897 [2024-05-15 05:30:21.893486] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:31.897 [2024-05-15 05:30:21.893562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255519 ] 00:05:32.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.156 [2024-05-15 05:30:21.962019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.156 [2024-05-15 05:30:22.040666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3255683 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3255683 /var/tmp/spdk2.sock 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3255683 ']' 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:32.745 05:30:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.745 [2024-05-15 05:30:22.694596] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:32.745 [2024-05-15 05:30:22.694643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255683 ] 00:05:32.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.004 [2024-05-15 05:30:22.785514] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.004 [2024-05-15 05:30:22.785536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.004 [2024-05-15 05:30:22.929414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.572 05:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:33.572 05:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:33.572 05:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3255519 00:05:33.572 05:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.572 05:30:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3255519 00:05:35.020 lslocks: write error 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3255519 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3255519 ']' 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 3255519 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3255519 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3255519' 00:05:35.020 killing process with pid 3255519 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 3255519 00:05:35.020 05:30:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 3255519 00:05:35.618 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3255683 00:05:35.618 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3255683 ']' 00:05:35.618 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 3255683 00:05:35.618 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:35.618 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:35.619 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3255683 00:05:35.619 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:35.619 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:35.619 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3255683' 00:05:35.619 
killing process with pid 3255683 00:05:35.619 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 3255683 00:05:35.619 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 3255683 00:05:35.878 00:05:35.878 real 0m3.901s 00:05:35.878 user 0m4.125s 00:05:35.878 sys 0m1.278s 00:05:35.878 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.878 05:30:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.879 ************************************ 00:05:35.879 END TEST non_locking_app_on_locked_coremask 00:05:35.879 ************************************ 00:05:35.879 05:30:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:35.879 05:30:25 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:35.879 05:30:25 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.879 05:30:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.879 ************************************ 00:05:35.879 START TEST locking_app_on_unlocked_coremask 00:05:35.879 ************************************ 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3256253 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3256253 /var/tmp/spdk.sock 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3256253 ']' 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:35.879 05:30:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.879 [2024-05-15 05:30:25.880026] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:35.879 [2024-05-15 05:30:25.880114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256253 ] 00:05:36.138 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.138 [2024-05-15 05:30:25.950744] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.138 [2024-05-15 05:30:25.950773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.138 [2024-05-15 05:30:26.026126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3256462 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3256462 /var/tmp/spdk2.sock 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3256462 ']' 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:36.707 05:30:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.707 [2024-05-15 05:30:26.720968] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:36.707 [2024-05-15 05:30:26.721056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256462 ] 00:05:36.967 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.967 [2024-05-15 05:30:26.815095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.967 [2024-05-15 05:30:26.959528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.536 05:30:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:37.536 05:30:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:37.536 05:30:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3256462 00:05:37.536 05:30:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3256462 00:05:37.536 05:30:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.472 lslocks: write error 00:05:38.472 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3256253 00:05:38.472 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3256253 ']' 00:05:38.472 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 3256253 00:05:38.472 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:38.472 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:38.472 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3256253 00:05:38.732 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:38.732 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:38.732 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3256253' 00:05:38.732 killing process with pid 3256253 00:05:38.732 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 3256253 00:05:38.732 05:30:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 3256253 00:05:39.299 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3256462 00:05:39.299 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3256462 ']' 00:05:39.299 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 3256462 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3256462 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3256462' 00:05:39.300 killing process with pid 3256462 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 3256462 00:05:39.300 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 3256462 00:05:39.558 00:05:39.558 real 0m3.611s 00:05:39.558 user 0m3.849s 00:05:39.558 sys 0m1.143s 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.558 ************************************ 00:05:39.558 END TEST locking_app_on_unlocked_coremask 00:05:39.558 ************************************ 00:05:39.558 05:30:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.558 05:30:29 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:39.558 05:30:29 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:39.558 05:30:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.558 ************************************ 00:05:39.558 START TEST locking_app_on_locked_coremask 00:05:39.558 ************************************ 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3256906 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3256906 /var/tmp/spdk.sock 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3256906 ']' 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.558 05:30:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.558 [2024-05-15 05:30:29.567598] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:39.558 [2024-05-15 05:30:29.567655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256906 ] 00:05:39.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.817 [2024-05-15 05:30:29.636350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.817 [2024-05-15 05:30:29.715046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3257095 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3257095 /var/tmp/spdk2.sock 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3257095 /var/tmp/spdk2.sock 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3257095 /var/tmp/spdk2.sock 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3257095 ']' 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:40.384 05:30:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.384 [2024-05-15 05:30:30.395012] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:40.384 [2024-05-15 05:30:30.395072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257095 ] 00:05:40.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.643 [2024-05-15 05:30:30.482056] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3256906 has claimed it. 00:05:40.643 [2024-05-15 05:30:30.482086] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.210 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (3257095) - No such process 00:05:41.210 ERROR: process (pid: 3257095) is no longer running 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3256906 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3256906 00:05:41.210 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.470 lslocks: write error 00:05:41.470 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3256906 00:05:41.470 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3256906 ']' 00:05:41.470 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 3256906 00:05:41.470 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:41.470 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:41.470 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3256906 00:05:41.729 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:41.729 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:41.729 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3256906' 00:05:41.729 killing process with pid 3256906 00:05:41.729 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 3256906 00:05:41.729 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 3256906 00:05:41.988 00:05:41.988 real 0m2.278s 00:05:41.988 user 0m2.475s 00:05:41.988 sys 0m0.656s 00:05:41.988 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.988 05:30:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 ************************************ 00:05:41.988 END TEST locking_app_on_locked_coremask 00:05:41.988 ************************************ 00:05:41.988 05:30:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:41.988 05:30:31 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:41.988 05:30:31 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.988 05:30:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 ************************************ 00:05:41.988 START TEST locking_overlapped_coremask 00:05:41.988 ************************************ 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3257397 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3257397 /var/tmp/spdk.sock 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 3257397 ']' 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:41.988 05:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 [2024-05-15 05:30:31.935733] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:41.988 [2024-05-15 05:30:31.935816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257397 ] 00:05:41.988 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.988 [2024-05-15 05:30:32.004354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.248 [2024-05-15 05:30:32.074046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.248 [2024-05-15 05:30:32.074143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.248 [2024-05-15 05:30:32.074143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3257541 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3257541 /var/tmp/spdk2.sock 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3257541 /var/tmp/spdk2.sock 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3257541 /var/tmp/spdk2.sock 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 3257541 ']' 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:42.816 05:30:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.816 [2024-05-15 05:30:32.775791] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:42.816 [2024-05-15 05:30:32.775860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257541 ] 00:05:42.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.074 [2024-05-15 05:30:32.870413] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3257397 has claimed it. 00:05:43.074 [2024-05-15 05:30:32.870451] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.652 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (3257541) - No such process 00:05:43.652 ERROR: process (pid: 3257541) is no longer running 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3257397 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 3257397 ']' 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 3257397 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3257397 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3257397' 00:05:43.652 killing process with pid 3257397 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
3257397 00:05:43.652 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 3257397 00:05:43.912 00:05:43.912 real 0m1.881s 00:05:43.912 user 0m5.309s 00:05:43.912 sys 0m0.454s 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.912 ************************************ 00:05:43.912 END TEST locking_overlapped_coremask 00:05:43.912 ************************************ 00:05:43.912 05:30:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.912 05:30:33 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:43.912 05:30:33 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:43.912 05:30:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.912 ************************************ 00:05:43.912 START TEST locking_overlapped_coremask_via_rpc 00:05:43.912 ************************************ 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3257704 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3257704 /var/tmp/spdk.sock 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3257704 ']' 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:43.912 05:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.912 [2024-05-15 05:30:33.890524] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:43.912 [2024-05-15 05:30:33.890567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257704 ] 00:05:43.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.171 [2024-05-15 05:30:33.956424] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.171 [2024-05-15 05:30:33.956447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.171 [2024-05-15 05:30:34.038616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.171 [2024-05-15 05:30:34.038710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.171 [2024-05-15 05:30:34.038712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3257962 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3257962 /var/tmp/spdk2.sock 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3257962 ']' 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:44.738 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.739 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:44.739 05:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.997 [2024-05-15 05:30:34.760131] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:44.997 [2024-05-15 05:30:34.760194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257962 ] 00:05:44.997 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.997 [2024-05-15 05:30:34.854729] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.997 [2024-05-15 05:30:34.854755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.997 [2024-05-15 05:30:35.005977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.997 [2024-05-15 05:30:35.006092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.997 [2024-05-15 05:30:35.006093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:45.565 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.826 [2024-05-15 05:30:35.589447] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3257704 has claimed it. 
00:05:45.826 request: 00:05:45.826 { 00:05:45.826 "method": "framework_enable_cpumask_locks", 00:05:45.826 "req_id": 1 00:05:45.826 } 00:05:45.826 Got JSON-RPC error response 00:05:45.826 response: 00:05:45.826 { 00:05:45.826 "code": -32603, 00:05:45.826 "message": "Failed to claim CPU core: 2" 00:05:45.826 } 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3257704 /var/tmp/spdk.sock 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3257704 ']' 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3257962 /var/tmp/spdk2.sock 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3257962 ']' 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
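The -32603 "Failed to claim CPU core: 2" response above is the expected result of this test: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks overlap on core 2, and once the first target holds /var/tmp/spdk_cpu_lock_002 the second one cannot claim it. A minimal sketch of the sequence the test drives, assuming the spdk checkout layout used in this log and that scripts/rpc.py exposes the framework_enable_cpumask_locks method named in the JSON-RPC request:

    # both targets boot with --disable-cpumask-locks, so no lock files are taken at startup
    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # cores 0,1,2
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2,3,4
    # succeeds: the first target claims /var/tmp/spdk_cpu_lock_000..002
    ./scripts/rpc.py framework_enable_cpumask_locks
    # fails with -32603: core 2 is already locked by the first target (pid 3257704 in this run)
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks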
00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:45.826 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.086 00:05:46.086 real 0m2.099s 00:05:46.086 user 0m0.818s 00:05:46.086 sys 0m0.211s 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:46.086 05:30:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.086 ************************************ 00:05:46.086 END TEST locking_overlapped_coremask_via_rpc 00:05:46.086 ************************************ 00:05:46.086 05:30:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.086 05:30:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3257704 ]] 00:05:46.086 05:30:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3257704 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3257704 ']' 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3257704 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3257704 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3257704' 00:05:46.086 killing process with pid 3257704 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 3257704 00:05:46.086 05:30:36 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 3257704 00:05:46.655 05:30:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3257962 ]] 00:05:46.655 05:30:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3257962 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3257962 ']' 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3257962 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3257962 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3257962' 00:05:46.655 killing process with pid 3257962 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 3257962 00:05:46.655 05:30:36 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 3257962 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3257704 ]] 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3257704 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3257704 ']' 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3257704 00:05:46.915 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3257704) - No such process 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 3257704 is not found' 00:05:46.915 Process with pid 3257704 is not found 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3257962 ]] 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3257962 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3257962 ']' 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3257962 00:05:46.915 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3257962) - No such process 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 3257962 is not found' 00:05:46.915 Process with pid 3257962 is not found 00:05:46.915 05:30:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:46.915 00:05:46.915 real 0m18.406s 00:05:46.915 user 0m30.532s 00:05:46.915 sys 0m5.900s 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:46.915 05:30:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 ************************************ 00:05:46.915 END TEST cpu_locks 00:05:46.915 ************************************ 00:05:46.915 00:05:46.915 real 0m44.745s 00:05:46.915 user 1m24.549s 00:05:46.915 sys 0m10.192s 00:05:46.915 05:30:36 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:46.915 05:30:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 ************************************ 00:05:46.915 END TEST event 00:05:46.915 ************************************ 00:05:46.915 05:30:36 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:46.915 05:30:36 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:46.915 05:30:36 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:46.915 05:30:36 -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 ************************************ 00:05:46.915 START TEST thread 00:05:46.915 ************************************ 00:05:46.915 05:30:36 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:47.174 * Looking for test storage... 00:05:47.174 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:47.174 05:30:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.174 05:30:36 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:47.174 05:30:36 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:47.174 05:30:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.174 ************************************ 00:05:47.174 START TEST thread_poller_perf 00:05:47.174 ************************************ 00:05:47.174 05:30:37 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.174 [2024-05-15 05:30:37.023265] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:47.174 [2024-05-15 05:30:37.023345] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258345 ] 00:05:47.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.174 [2024-05-15 05:30:37.094974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.174 [2024-05-15 05:30:37.166058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.174 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:48.553 ====================================== 00:05:48.553 busy:2505678324 (cyc) 00:05:48.553 total_run_count: 871000 00:05:48.553 tsc_hz: 2500000000 (cyc) 00:05:48.553 ====================================== 00:05:48.553 poller_cost: 2876 (cyc), 1150 (nsec) 00:05:48.553 00:05:48.553 real 0m1.228s 00:05:48.553 user 0m1.136s 00:05:48.553 sys 0m0.088s 00:05:48.553 05:30:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:48.553 05:30:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 ************************************ 00:05:48.553 END TEST thread_poller_perf 00:05:48.553 ************************************ 00:05:48.553 05:30:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.553 05:30:38 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:05:48.553 05:30:38 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:48.553 05:30:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 ************************************ 00:05:48.553 START TEST thread_poller_perf 00:05:48.553 ************************************ 00:05:48.553 05:30:38 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.553 [2024-05-15 05:30:38.337403] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:48.553 [2024-05-15 05:30:38.337529] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258627 ] 00:05:48.553 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.553 [2024-05-15 05:30:38.408741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.553 [2024-05-15 05:30:38.480900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.553 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:49.932 ====================================== 00:05:49.932 busy:2501450456 (cyc) 00:05:49.932 total_run_count: 13941000 00:05:49.932 tsc_hz: 2500000000 (cyc) 00:05:49.932 ====================================== 00:05:49.932 poller_cost: 179 (cyc), 71 (nsec) 00:05:49.932 00:05:49.932 real 0m1.227s 00:05:49.932 user 0m1.133s 00:05:49.932 sys 0m0.089s 00:05:49.932 05:30:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:49.932 05:30:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.932 ************************************ 00:05:49.932 END TEST thread_poller_perf 00:05:49.932 ************************************ 00:05:49.932 05:30:39 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:49.932 05:30:39 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:49.932 05:30:39 thread -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:49.932 05:30:39 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:49.932 05:30:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.932 ************************************ 00:05:49.932 START TEST thread_spdk_lock 00:05:49.932 ************************************ 00:05:49.932 05:30:39 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:49.932 [2024-05-15 05:30:39.645588] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:49.932 [2024-05-15 05:30:39.645667] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258912 ] 00:05:49.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.932 [2024-05-15 05:30:39.717169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.932 [2024-05-15 05:30:39.788329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.932 [2024-05-15 05:30:39.788333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.501 [2024-05-15 05:30:40.278713] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:50.501 [2024-05-15 05:30:40.278756] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:50.502 [2024-05-15 05:30:40.278767] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14b75c0 00:05:50.502 [2024-05-15 05:30:40.279705] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:50.502 [2024-05-15 05:30:40.279811] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:50.502 [2024-05-15 05:30:40.279831] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:50.502 Starting test contend 00:05:50.502 Worker Delay Wait us Hold us Total us 00:05:50.502 0 3 168893 185523 354417 00:05:50.502 1 5 85366 285912 371278 00:05:50.502 PASS test contend 00:05:50.502 Starting test hold_by_poller 00:05:50.502 PASS test hold_by_poller 00:05:50.502 Starting test hold_by_message 00:05:50.502 PASS test hold_by_message 00:05:50.502 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:50.502 100014 assertions passed 00:05:50.502 0 assertions failed 00:05:50.502 00:05:50.502 real 0m0.717s 00:05:50.502 user 0m1.123s 00:05:50.502 sys 0m0.082s 00:05:50.502 05:30:40 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:50.502 05:30:40 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:50.502 ************************************ 00:05:50.502 END TEST thread_spdk_lock 00:05:50.502 ************************************ 00:05:50.502 00:05:50.502 real 0m3.504s 00:05:50.502 user 0m3.489s 00:05:50.502 sys 0m0.498s 00:05:50.502 05:30:40 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:50.502 05:30:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.502 ************************************ 00:05:50.502 END TEST thread 00:05:50.502 ************************************ 00:05:50.502 05:30:40 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:50.502 05:30:40 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:50.502 05:30:40 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:50.502 05:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:50.502 ************************************ 00:05:50.502 START TEST accel 00:05:50.502 ************************************ 00:05:50.502 05:30:40 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:50.762 * Looking for test storage... 00:05:50.762 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:50.762 05:30:40 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:50.762 05:30:40 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:50.762 05:30:40 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.762 05:30:40 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3259181 00:05:50.762 05:30:40 accel -- accel/accel.sh@63 -- # waitforlisten 3259181 00:05:50.762 05:30:40 accel -- common/autotest_common.sh@828 -- # '[' -z 3259181 ']' 00:05:50.762 05:30:40 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.762 05:30:40 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:50.762 05:30:40 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:50.762 05:30:40 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.762 05:30:40 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:50.762 05:30:40 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:50.762 05:30:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.762 05:30:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.762 05:30:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.762 05:30:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.762 05:30:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.762 05:30:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.762 05:30:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:50.762 05:30:40 accel -- accel/accel.sh@41 -- # jq -r . 00:05:50.762 [2024-05-15 05:30:40.609189] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:50.762 [2024-05-15 05:30:40.609256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259181 ] 00:05:50.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.762 [2024-05-15 05:30:40.679898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.762 [2024-05-15 05:30:40.751146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.700 05:30:41 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:51.700 05:30:41 accel -- common/autotest_common.sh@861 -- # return 0 00:05:51.700 05:30:41 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:51.700 05:30:41 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:51.700 05:30:41 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:51.700 05:30:41 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:51.700 05:30:41 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:51.700 05:30:41 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:51.700 05:30:41 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:51.700 05:30:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.700 05:30:41 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:51.700 05:30:41 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:51.700 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 
05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # IFS== 00:05:51.701 05:30:41 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:51.701 05:30:41 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:51.701 05:30:41 accel -- accel/accel.sh@75 -- # killprocess 3259181 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@947 -- # '[' -z 3259181 ']' 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@951 -- # kill -0 3259181 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@952 -- # uname 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3259181 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3259181' 00:05:51.701 killing process with pid 3259181 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@966 -- # kill 3259181 00:05:51.701 05:30:41 accel -- common/autotest_common.sh@971 -- # wait 3259181 00:05:51.961 05:30:41 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:51.961 05:30:41 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:51.961 05:30:41 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:05:51.961 05:30:41 accel -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.961 05:30:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.961 05:30:41 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:51.961 05:30:41 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:51.961 05:30:41 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.961 05:30:41 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:51.961 05:30:41 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:51.961 05:30:41 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:51.961 05:30:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.961 05:30:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.961 ************************************ 00:05:51.961 START TEST accel_missing_filename 00:05:51.961 ************************************ 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.961 05:30:41 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:52.221 05:30:41 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 
00:05:52.221 [2024-05-15 05:30:42.000218] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:52.221 [2024-05-15 05:30:42.000301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259435 ] 00:05:52.221 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.221 [2024-05-15 05:30:42.071420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.221 [2024-05-15 05:30:42.143292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.221 [2024-05-15 05:30:42.183132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.481 [2024-05-15 05:30:42.243080] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:52.481 A filename is required. 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:52.481 00:05:52.481 real 0m0.334s 00:05:52.481 user 0m0.239s 00:05:52.481 sys 0m0.132s 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:52.481 05:30:42 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:52.481 ************************************ 00:05:52.481 END TEST accel_missing_filename 00:05:52.481 ************************************ 00:05:52.481 05:30:42 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:52.481 05:30:42 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:52.481 05:30:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:52.481 05:30:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.481 ************************************ 00:05:52.481 START TEST accel_compress_verify 00:05:52.481 ************************************ 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:52.481 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@652 -- 
# accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:52.481 05:30:42 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:52.481 [2024-05-15 05:30:42.424393] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:52.481 [2024-05-15 05:30:42.424479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259563 ] 00:05:52.481 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.481 [2024-05-15 05:30:42.496269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.741 [2024-05-15 05:30:42.567311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.741 [2024-05-15 05:30:42.606912] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.741 [2024-05-15 05:30:42.666628] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:52.741 00:05:52.741 Compression does not support the verify option, aborting. 
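The "Compression does not support the verify option, aborting." line above is the point of the accel_compress_verify case: it deliberately passes -y to a compress workload and expects accel_perf to refuse it, just as accel_missing_filename earlier expected "A filename is required." when -l was omitted. A sketch of the valid form, assuming the same binary and input file used by these tests and a working directory at the spdk checkout:

    # rejected here: compress combined with -y (verify)
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y
    # accepted form: compress the uncompressed input named by -l, with no verification
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib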
00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:52.741 00:05:52.741 real 0m0.333s 00:05:52.741 user 0m0.239s 00:05:52.741 sys 0m0.134s 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:52.741 05:30:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:52.741 ************************************ 00:05:52.741 END TEST accel_compress_verify 00:05:52.741 ************************************ 00:05:53.001 05:30:42 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:53.001 05:30:42 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:53.001 05:30:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.001 05:30:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.001 ************************************ 00:05:53.001 START TEST accel_wrong_workload 00:05:53.001 ************************************ 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:53.001 05:30:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:53.001 Unsupported workload type: foobar 00:05:53.001 [2024-05-15 05:30:42.830257] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:53.001 accel_perf options: 00:05:53.001 [-h help message] 00:05:53.001 [-q queue depth per core] 00:05:53.001 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.001 [-T number of threads per core 00:05:53.001 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.001 [-t time in seconds] 00:05:53.001 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.001 [ dif_verify, , dif_generate, dif_generate_copy 00:05:53.001 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.001 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.001 [-S for crc32c workload, use this seed value (default 0) 00:05:53.001 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.001 [-f for fill workload, use this BYTE value (default 255) 00:05:53.001 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.001 [-y verify result if this switch is on] 00:05:53.001 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.001 Can be used to spread operations across a wider range of memory. 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:53.001 00:05:53.001 real 0m0.016s 00:05:53.001 user 0m0.008s 00:05:53.001 sys 0m0.008s 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.001 05:30:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:53.001 ************************************ 00:05:53.001 END TEST accel_wrong_workload 00:05:53.001 ************************************ 00:05:53.002 Error: writing output failed: Broken pipe 00:05:53.002 05:30:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.002 05:30:42 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:05:53.002 05:30:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.002 05:30:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.002 ************************************ 00:05:53.002 START TEST accel_negative_buffers 00:05:53.002 ************************************ 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.002 05:30:42 accel.accel_negative_buffers -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:53.002 05:30:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:53.002 -x option must be non-negative. 00:05:53.002 [2024-05-15 05:30:42.929045] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:53.002 accel_perf options: 00:05:53.002 [-h help message] 00:05:53.002 [-q queue depth per core] 00:05:53.002 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.002 [-T number of threads per core 00:05:53.002 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.002 [-t time in seconds] 00:05:53.002 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.002 [ dif_verify, , dif_generate, dif_generate_copy 00:05:53.002 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.002 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.002 [-S for crc32c workload, use this seed value (default 0) 00:05:53.002 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.002 [-f for fill workload, use this BYTE value (default 255) 00:05:53.002 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.002 [-y verify result if this switch is on] 00:05:53.002 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.002 Can be used to spread operations across a wider range of memory. 
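Both option listings above come from negative tests: accel_wrong_workload passes -w foobar, which is not in the supported workload set, and accel_negative_buffers passes -x -1, which the parser rejects ("-x option must be non-negative", with a documented minimum of 2 xor source buffers). A hedged sketch of invocations built only from flags shown in that listing, which the parser should accept when run from the spdk checkout used in this log:

    # 1 second of 4 KiB copy operations, 64 outstanding per core, with result verification
    ./build/examples/accel_perf -t 1 -w copy -q 64 -o 4096 -y
    # xor across 3 source buffers instead of the rejected -x -1
    ./build/examples/accel_perf -t 1 -w xor -y -x 3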
00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:53.002 00:05:53.002 real 0m0.025s 00:05:53.002 user 0m0.008s 00:05:53.002 sys 0m0.017s 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.002 05:30:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:53.002 ************************************ 00:05:53.002 END TEST accel_negative_buffers 00:05:53.002 ************************************ 00:05:53.002 Error: writing output failed: Broken pipe 00:05:53.002 05:30:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:53.002 05:30:42 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:53.002 05:30:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.002 05:30:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.002 ************************************ 00:05:53.002 START TEST accel_crc32c 00:05:53.002 ************************************ 00:05:53.002 05:30:43 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:53.002 05:30:43 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:53.262 [2024-05-15 05:30:43.034320] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:53.262 [2024-05-15 05:30:43.034400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259634 ] 00:05:53.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.262 [2024-05-15 05:30:43.107212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.262 [2024-05-15 05:30:43.178242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:53.262 05:30:43 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.262 05:30:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:54.642 05:30:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.642 00:05:54.642 real 0m1.334s 00:05:54.642 user 0m1.200s 00:05:54.642 sys 0m0.135s 00:05:54.642 05:30:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:54.642 05:30:44 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:54.642 ************************************ 00:05:54.642 END TEST accel_crc32c 00:05:54.642 ************************************ 00:05:54.642 05:30:44 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:54.642 05:30:44 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:05:54.642 05:30:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:54.642 05:30:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.642 ************************************ 00:05:54.642 START TEST accel_crc32c_C2 00:05:54.642 ************************************ 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:54.642 [2024-05-15 05:30:44.439292] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
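For reference while reading these traces: each accel_* sub-test is the accel.sh wrapper echoing back, one "val" at a time, the configuration it handed to the accel_perf example, then printing a real/user/sys summary once the 1-second run completes. The crc32c_C2 run that starts here is driven by the command line shown just above, copied verbatim from this log; the -c /dev/fd/62 argument is the descriptor through which the wrapper appears to feed its accel JSON config (empty for these runs), and the extra -C 2 flag is what distinguishes it from the plain accel_crc32c run that just finished:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2

The values the wrapper reads back below (crc32c, '4096 bytes', software, 1, '1 seconds', Yes) are its view of that same configuration.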
00:05:54.642 [2024-05-15 05:30:44.439363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259917 ] 00:05:54.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.642 [2024-05-15 05:30:44.508315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.642 [2024-05-15 05:30:44.578720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.642 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.643 05:30:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.150 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.151 00:05:56.151 real 0m1.330s 00:05:56.151 user 0m1.209s 00:05:56.151 sys 0m0.122s 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:56.151 05:30:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:56.151 ************************************ 00:05:56.151 END TEST accel_crc32c_C2 00:05:56.151 ************************************ 00:05:56.151 05:30:45 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:56.151 05:30:45 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:56.151 05:30:45 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:56.151 05:30:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.151 ************************************ 00:05:56.151 START TEST accel_copy 00:05:56.151 ************************************ 00:05:56.151 05:30:45 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.151 05:30:45 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:56.151 05:30:45 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:56.151 [2024-05-15 05:30:45.844034] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:56.151 [2024-05-15 05:30:45.844120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260202 ] 00:05:56.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.151 [2024-05-15 05:30:45.913002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.151 [2024-05-15 05:30:45.983103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.151 05:30:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:57.531 05:30:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.531 00:05:57.531 real 0m1.328s 00:05:57.531 user 0m1.199s 00:05:57.531 sys 0m0.130s 00:05:57.531 05:30:47 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:57.531 05:30:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:57.531 ************************************ 00:05:57.531 END TEST accel_copy 00:05:57.531 ************************************ 00:05:57.531 05:30:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.531 05:30:47 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:05:57.531 05:30:47 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:57.531 05:30:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.531 ************************************ 00:05:57.531 START TEST accel_fill 00:05:57.531 ************************************ 00:05:57.531 05:30:47 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:57.531 [2024-05-15 05:30:47.253007] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
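The fill run starting here is invoked with '-t 1 -w fill -f 128 -q 64 -a 64 -y' (both the run_test line and the accel_perf command above show it verbatim). In the trace that follows, the wrapper reads those settings back as fill, 0x80 (128 in hex), '4096 bytes', software, 64 and 64, which appears to line up with the -f, -q and -a flags. A by-hand rerun against the same build tree would use the command exactly as this log records it:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y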
00:05:57.531 [2024-05-15 05:30:47.253082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260487 ] 00:05:57.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.531 [2024-05-15 05:30:47.323435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.531 [2024-05-15 05:30:47.394027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:57.531 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:57.532 05:30:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:58.916 05:30:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.916 00:05:58.916 real 0m1.331s 00:05:58.916 user 0m1.211s 00:05:58.916 sys 0m0.121s 00:05:58.916 05:30:48 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:58.916 05:30:48 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:58.916 ************************************ 00:05:58.916 END TEST accel_fill 00:05:58.916 ************************************ 00:05:58.916 05:30:48 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:58.916 05:30:48 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:05:58.916 05:30:48 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:58.916 05:30:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.916 ************************************ 00:05:58.916 START TEST accel_copy_crc32c 00:05:58.916 ************************************ 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:58.916 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:58.916 [2024-05-15 05:30:48.662989] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
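copy_crc32c combines the two earlier operations: the data is copied and a CRC-32C is computed over it in the same request. The wrapper drives it here with '-t 1 -w copy_crc32c -y' and, in the trace below, reads back copy_crc32c, a 0, and two '4096 bytes' buffer sizes; the accel_copy_crc32c_C2 variant later in this log adds '-C 2' and echoes '4096 bytes' plus '8192 bytes' instead. The invocation, exactly as this log records it:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y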
00:05:58.916 [2024-05-15 05:30:48.663071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260768 ] 00:05:58.916 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.916 [2024-05-15 05:30:48.733081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.917 [2024-05-15 05:30:48.803309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.917 05:30:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.297 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:49 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.298 00:06:00.298 real 0m1.329s 00:06:00.298 user 0m1.201s 00:06:00.298 sys 0m0.130s 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:00.298 05:30:49 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:00.298 ************************************ 00:06:00.298 END TEST accel_copy_crc32c 00:06:00.298 ************************************ 00:06:00.298 05:30:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.298 05:30:50 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:00.298 05:30:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:00.298 05:30:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.298 ************************************ 00:06:00.298 START TEST accel_copy_crc32c_C2 00:06:00.298 ************************************ 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:00.298 [2024-05-15 05:30:50.078941] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:00.298 [2024-05-15 05:30:50.079017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261052 ] 00:06:00.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.298 [2024-05-15 05:30:50.151592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.298 [2024-05-15 05:30:50.226179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.298 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.299 05:30:50 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:00.299 05:30:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.679 00:06:01.679 real 0m1.340s 00:06:01.679 user 0m1.207s 00:06:01.679 sys 0m0.136s 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.679 05:30:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:01.679 ************************************ 00:06:01.679 END TEST 
accel_copy_crc32c_C2 00:06:01.679 ************************************ 00:06:01.679 05:30:51 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:01.679 05:30:51 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:01.679 05:30:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.679 05:30:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.679 ************************************ 00:06:01.679 START TEST accel_dualcast 00:06:01.679 ************************************ 00:06:01.680 05:30:51 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:01.680 [2024-05-15 05:30:51.499216] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
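dualcast, as the name suggests, writes one source buffer out to two destinations, so the trace that follows echoes a single '4096 bytes' size alongside the usual software module and 1-second run settings. Each sub-test above closed the same way: the wrapper asserts that a module and an opcode were selected ('[[ -n software ]]', '[[ -n copy_crc32c ]]' and so on) and that the software path was used, then reports real/user/sys times of roughly 1.3s, 1.2s and 0.13s. The dualcast command line from this log, for a by-hand rerun against the same build tree:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y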
00:06:01.680 [2024-05-15 05:30:51.499293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261339 ] 00:06:01.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.680 [2024-05-15 05:30:51.569574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.680 [2024-05-15 05:30:51.639606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 
05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.680 05:30:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:03.060 05:30:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.060 00:06:03.060 real 0m1.331s 00:06:03.060 user 0m1.200s 00:06:03.060 sys 0m0.133s 00:06:03.060 05:30:52 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.060 05:30:52 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:03.060 ************************************ 00:06:03.060 END TEST accel_dualcast 00:06:03.060 ************************************ 00:06:03.060 05:30:52 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:03.060 05:30:52 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:03.060 05:30:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.060 05:30:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.060 ************************************ 00:06:03.060 START TEST accel_compare 00:06:03.060 ************************************ 00:06:03.060 05:30:52 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.060 05:30:52 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.061 05:30:52 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.061 05:30:52 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:03.061 05:30:52 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:03.061 [2024-05-15 05:30:52.911321] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:03.061 [2024-05-15 05:30:52.911400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261576 ] 00:06:03.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.061 [2024-05-15 05:30:52.982663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.061 [2024-05-15 05:30:53.059362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:03.327 05:30:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.264 05:30:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.264 05:30:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.264 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.264 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.264 05:30:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.264 05:30:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.265 05:30:54 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:04.265 05:30:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.265 00:06:04.265 real 0m1.340s 00:06:04.265 user 0m1.209s 00:06:04.265 sys 0m0.134s 00:06:04.265 05:30:54 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:04.265 05:30:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:04.265 ************************************ 00:06:04.265 END TEST accel_compare 00:06:04.265 ************************************ 00:06:04.265 05:30:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:04.265 05:30:54 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:04.265 05:30:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:04.265 05:30:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.525 ************************************ 00:06:04.525 START TEST accel_xor 00:06:04.525 ************************************ 00:06:04.525 05:30:54 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:04.525 [2024-05-15 05:30:54.326402] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:04.525 [2024-05-15 05:30:54.326482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261804 ] 00:06:04.525 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.525 [2024-05-15 05:30:54.396062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.525 [2024-05-15 05:30:54.469734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:04.525 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.526 05:30:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.905 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.906 
05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.906 00:06:05.906 real 0m1.332s 00:06:05.906 user 0m1.204s 00:06:05.906 sys 0m0.130s 00:06:05.906 05:30:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.906 05:30:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:05.906 ************************************ 00:06:05.906 END TEST accel_xor 00:06:05.906 ************************************ 00:06:05.906 05:30:55 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:05.906 05:30:55 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:05.906 05:30:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.906 05:30:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.906 ************************************ 00:06:05.906 START TEST accel_xor 00:06:05.906 ************************************ 00:06:05.906 05:30:55 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:05.906 [2024-05-15 05:30:55.741873] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:05.906 [2024-05-15 05:30:55.741953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262030 ] 00:06:05.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.906 [2024-05-15 05:30:55.811648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.906 [2024-05-15 05:30:55.884679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.906 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:06.165 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.166 05:30:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.104 05:30:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.104 05:30:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.104 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.104 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.104 05:30:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.105 
05:30:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:07.105 05:30:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.105 00:06:07.105 real 0m1.331s 00:06:07.105 user 0m1.208s 00:06:07.105 sys 0m0.126s 00:06:07.105 05:30:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.105 05:30:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:07.105 ************************************ 00:06:07.105 END TEST accel_xor 00:06:07.105 ************************************ 00:06:07.105 05:30:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:07.105 05:30:57 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:07.105 05:30:57 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.105 05:30:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.365 ************************************ 00:06:07.365 START TEST accel_dif_verify 00:06:07.365 ************************************ 00:06:07.365 05:30:57 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:07.365 [2024-05-15 05:30:57.158841] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:07.365 [2024-05-15 05:30:57.158922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262247 ] 00:06:07.365 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.365 [2024-05-15 05:30:57.231034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.365 [2024-05-15 05:30:57.304847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 
05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:07.365 05:30:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.745 
05:30:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:08.745 05:30:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.745 00:06:08.745 real 0m1.338s 00:06:08.745 user 0m1.205s 00:06:08.745 sys 0m0.136s 00:06:08.745 05:30:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:08.745 05:30:58 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:08.745 ************************************ 00:06:08.745 END TEST accel_dif_verify 00:06:08.745 ************************************ 00:06:08.745 05:30:58 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:08.745 05:30:58 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:08.745 05:30:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:08.745 05:30:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.745 ************************************ 00:06:08.745 START TEST accel_dif_generate 00:06:08.745 ************************************ 00:06:08.745 05:30:58 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.745 
05:30:58 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.745 05:30:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:08.746 [2024-05-15 05:30:58.582018] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:08.746 [2024-05-15 05:30:58.582090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262517 ] 00:06:08.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.746 [2024-05-15 05:30:58.654917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.746 [2024-05-15 05:30:58.724679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.746 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.005 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.006 05:30:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:09.952 05:30:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.952 00:06:09.952 real 0m1.334s 00:06:09.952 user 0m1.208s 00:06:09.952 sys 
0m0.129s 00:06:09.952 05:30:59 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:09.952 05:30:59 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:09.952 ************************************ 00:06:09.952 END TEST accel_dif_generate 00:06:09.952 ************************************ 00:06:09.952 05:30:59 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:09.952 05:30:59 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:09.952 05:30:59 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:09.952 05:30:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.211 ************************************ 00:06:10.211 START TEST accel_dif_generate_copy 00:06:10.211 ************************************ 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.211 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:10.212 05:30:59 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:10.212 [2024-05-15 05:30:59.995475] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:10.212 [2024-05-15 05:30:59.995557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262799 ] 00:06:10.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.212 [2024-05-15 05:31:00.068315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.212 [2024-05-15 05:31:00.147865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:10.212 05:31:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.591 00:06:11.591 real 0m1.343s 00:06:11.591 user 0m1.211s 00:06:11.591 sys 0m0.134s 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:11.591 05:31:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:11.591 ************************************ 00:06:11.591 END TEST accel_dif_generate_copy 00:06:11.591 ************************************ 00:06:11.591 05:31:01 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:11.591 05:31:01 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.591 05:31:01 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:11.591 05:31:01 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:11.591 05:31:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.591 ************************************ 00:06:11.591 START TEST accel_comp 00:06:11.591 ************************************ 00:06:11.591 05:31:01 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:11.591 [2024-05-15 05:31:01.417089] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:11.591 [2024-05-15 05:31:01.417175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263083 ] 00:06:11.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.591 [2024-05-15 05:31:01.485917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.591 [2024-05-15 05:31:01.556897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 
05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:11.591 05:31:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:12.969 05:31:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.969 00:06:12.969 real 0m1.331s 00:06:12.969 user 0m1.209s 00:06:12.969 sys 0m0.125s 00:06:12.969 05:31:02 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.969 05:31:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:12.969 ************************************ 00:06:12.969 END TEST accel_comp 00:06:12.969 ************************************ 00:06:12.969 05:31:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:12.969 05:31:02 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:12.969 05:31:02 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:12.969 05:31:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.969 ************************************ 00:06:12.969 START TEST accel_decomp 00:06:12.969 ************************************ 00:06:12.969 05:31:02 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:12.969 05:31:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:12.969 [2024-05-15 05:31:02.835401] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:12.969 [2024-05-15 05:31:02.835487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263371 ] 00:06:12.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.969 [2024-05-15 05:31:02.907723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.969 [2024-05-15 05:31:02.977830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 
05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:13.228 05:31:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.164 05:31:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.165 05:31:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.165 00:06:14.165 real 0m1.336s 00:06:14.165 user 0m1.197s 00:06:14.165 sys 0m0.141s 00:06:14.165 05:31:04 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.165 05:31:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:14.165 ************************************ 00:06:14.165 END TEST accel_decomp 00:06:14.165 ************************************ 00:06:14.165 
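The accel_decomp case that ends here is the first one in this log to run the bib test file end to end (decompress with verification). For reference, a minimal sketch of how that invocation could be reproduced by hand, assuming the same workspace layout as this run; the command line is copied from the accel_perf trace above, and the flag notes are inferences from the accel.sh trace rather than authoritative accel_perf documentation:

  # Hypothetical manual re-run of the accel_decomp case traced above (sketch only).
  # -t 1            run for '1 seconds', per the harness trace
  # -w decompress   workload (accel_opc=decompress above)
  # -l .../bib      compressed input file used by the comp/decomp tests
  # -y              verify the output, as the harness does
  # The harness also feeds a generated JSON config via '-c /dev/fd/62'; dropping it
  # for a plain software-path run is an assumption, not something this log shows.
  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y

The later cases below run the same command with -o 0 (accel_decmop_full) and with -m 0xf on four cores (accel_decomp_mcore / accel_decomp_full_mcore), per their respective traces.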
05:31:04 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.165 05:31:04 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:14.165 05:31:04 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.165 05:31:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.425 ************************************ 00:06:14.425 START TEST accel_decmop_full 00:06:14.425 ************************************ 00:06:14.425 05:31:04 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:14.425 [2024-05-15 05:31:04.251787] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:14.425 [2024-05-15 05:31:04.251867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263651 ] 00:06:14.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.425 [2024-05-15 05:31:04.323708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.425 [2024-05-15 05:31:04.393767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.425 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:14.426 05:31:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:15.803 05:31:05 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.803 00:06:15.803 real 0m1.341s 00:06:15.803 user 0m1.216s 00:06:15.803 sys 0m0.126s 00:06:15.803 05:31:05 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.803 05:31:05 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:15.803 ************************************ 00:06:15.803 END TEST accel_decmop_full 00:06:15.803 ************************************ 00:06:15.803 05:31:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.803 05:31:05 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:15.803 05:31:05 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.803 05:31:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.803 ************************************ 00:06:15.803 START TEST accel_decomp_mcore 00:06:15.803 ************************************ 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:15.803 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:15.803 [2024-05-15 05:31:05.669552] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:15.803 [2024-05-15 05:31:05.669632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263940 ] 00:06:15.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.803 [2024-05-15 05:31:05.738754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.803 [2024-05-15 05:31:05.811848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.803 [2024-05-15 05:31:05.811945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.803 [2024-05-15 05:31:05.812028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.803 [2024-05-15 05:31:05.812031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.063 05:31:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
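The trace above is the accel_decomp_mcore case: the prebuilt accel_perf example runs for one second (-t 1) in decompress mode against test/accel/bib, with -y enabling result verification and the 0xf core mask spreading the work across the four reactors started above, all on the software module. As a rough sketch only (assuming the same workspace checkout; the harness also streams an accel JSON config over /dev/fd/62, which is empty in this run and therefore omitted here):

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf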
00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.001 00:06:17.001 real 0m1.350s 00:06:17.001 user 0m4.559s 00:06:17.001 sys 0m0.135s 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:17.001 05:31:06 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:17.001 ************************************ 00:06:17.001 END TEST accel_decomp_mcore 00:06:17.001 ************************************ 00:06:17.261 05:31:07 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.261 05:31:07 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:17.261 05:31:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:17.261 05:31:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.261 ************************************ 00:06:17.261 START TEST accel_decomp_full_mcore 00:06:17.261 ************************************ 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:17.261 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:17.261 [2024-05-15 05:31:07.109540] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:17.261 [2024-05-15 05:31:07.109619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264229 ] 00:06:17.261 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.261 [2024-05-15 05:31:07.178889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.261 [2024-05-15 05:31:07.253142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.261 [2024-05-15 05:31:07.253230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.261 [2024-05-15 05:31:07.253297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.261 [2024-05-15 05:31:07.253298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:17.521 05:31:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:17.521 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:17.522 05:31:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.498 00:06:18.498 real 0m1.360s 00:06:18.498 user 0m4.572s 00:06:18.498 sys 0m0.145s 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:18.498 05:31:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:18.498 ************************************ 00:06:18.498 END TEST accel_decomp_full_mcore 00:06:18.498 ************************************ 00:06:18.498 05:31:08 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.498 05:31:08 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:18.498 05:31:08 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:18.498 05:31:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.758 ************************************ 00:06:18.758 START TEST accel_decomp_mthread 00:06:18.758 ************************************ 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.758 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:06:18.759 [2024-05-15 05:31:08.557940] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:18.759 [2024-05-15 05:31:08.558023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264514 ] 00:06:18.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.759 [2024-05-15 05:31:08.628795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.759 [2024-05-15 05:31:08.698767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 
05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.759 05:31:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.759 05:31:08 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.137 00:06:20.137 real 0m1.343s 00:06:20.137 user 0m1.228s 00:06:20.137 sys 0m0.130s 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:20.137 05:31:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:20.137 ************************************ 00:06:20.137 END TEST accel_decomp_mthread 00:06:20.137 ************************************ 00:06:20.137 05:31:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:20.137 05:31:09 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:20.137 05:31:09 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:20.137 
05:31:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.137 ************************************ 00:06:20.137 START TEST accel_decomp_full_mthread 00:06:20.137 ************************************ 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:20.137 05:31:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:20.137 [2024-05-15 05:31:09.990172] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
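The two mthread variants drop the multicore mask (the EAL parameters now carry -c 0x1 and only reactor 0 starts) and instead pass -T 2, which shows up in the trace as val=2 and presumably requests two worker threads on that core; accel_decomp_full_mthread additionally keeps -o 0 for the full 111250-byte buffer. Sketches of both invocations, assuming the same workspace paths:

    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2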
00:06:20.137 [2024-05-15 05:31:09.990245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264767 ] 00:06:20.137 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.137 [2024-05-15 05:31:10.065138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.137 [2024-05-15 05:31:10.145678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.397 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:20.398 05:31:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.335 00:06:21.335 real 0m1.381s 00:06:21.335 user 0m1.256s 00:06:21.335 sys 0m0.137s 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:21.335 05:31:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:21.335 ************************************ 00:06:21.335 END TEST accel_decomp_full_mthread 00:06:21.335 
************************************ 00:06:21.595 05:31:11 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:21.595 05:31:11 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:21.595 05:31:11 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:21.595 05:31:11 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:21.595 05:31:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.595 05:31:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:21.595 05:31:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.595 05:31:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.595 05:31:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.595 05:31:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.595 05:31:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.595 05:31:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:21.595 05:31:11 accel -- accel/accel.sh@41 -- # jq -r . 00:06:21.595 ************************************ 00:06:21.595 START TEST accel_dif_functional_tests 00:06:21.595 ************************************ 00:06:21.595 05:31:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:21.595 [2024-05-15 05:31:11.467735] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:21.595 [2024-05-15 05:31:11.467814] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265007 ] 00:06:21.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.595 [2024-05-15 05:31:11.541151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.595 [2024-05-15 05:31:11.615439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.595 [2024-05-15 05:31:11.615522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.595 [2024-05-15 05:31:11.615524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.854 00:06:21.854 00:06:21.854 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.854 http://cunit.sourceforge.net/ 00:06:21.854 00:06:21.854 00:06:21.854 Suite: accel_dif 00:06:21.854 Test: verify: DIF generated, GUARD check ...passed 00:06:21.854 Test: verify: DIF generated, APPTAG check ...passed 00:06:21.854 Test: verify: DIF generated, REFTAG check ...passed 00:06:21.854 Test: verify: DIF not generated, GUARD check ...[2024-05-15 05:31:11.683605] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:21.854 [2024-05-15 05:31:11.683654] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:21.854 passed 00:06:21.854 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 05:31:11.683687] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:21.854 [2024-05-15 05:31:11.683706] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:21.854 passed 00:06:21.854 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 05:31:11.683729] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:21.854 [2024-05-15 
05:31:11.683749] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:21.854 passed 00:06:21.854 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:21.854 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 05:31:11.683809] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:21.854 passed 00:06:21.854 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:21.854 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:21.854 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:21.854 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 05:31:11.683908] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:21.854 passed 00:06:21.854 Test: generate copy: DIF generated, GUARD check ...passed 00:06:21.854 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:21.854 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:21.854 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:21.854 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:21.854 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:21.854 Test: generate copy: iovecs-len validate ...[2024-05-15 05:31:11.684079] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:21.854 passed 00:06:21.854 Test: generate copy: buffer alignment validate ...passed 00:06:21.854 00:06:21.854 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.854 suites 1 1 n/a 0 0 00:06:21.854 tests 20 20 20 0 0 00:06:21.854 asserts 204 204 204 0 n/a 00:06:21.854 00:06:21.854 Elapsed time = 0.000 seconds 00:06:21.854 00:06:21.854 real 0m0.402s 00:06:21.854 user 0m0.557s 00:06:21.854 sys 0m0.160s 00:06:21.854 05:31:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:21.854 05:31:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:21.854 ************************************ 00:06:21.854 END TEST accel_dif_functional_tests 00:06:21.854 ************************************ 00:06:22.112 00:06:22.112 real 0m31.415s 00:06:22.112 user 0m34.234s 00:06:22.112 sys 0m4.971s 00:06:22.112 05:31:11 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:22.112 05:31:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.112 ************************************ 00:06:22.112 END TEST accel 00:06:22.112 ************************************ 00:06:22.112 05:31:11 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:22.112 05:31:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:22.112 05:31:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:22.112 05:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:22.112 ************************************ 00:06:22.112 START TEST accel_rpc 00:06:22.112 ************************************ 00:06:22.112 05:31:11 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:22.112 * Looking for test storage... 
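accel_dif_functional_tests switches from the perf example to the dedicated CUnit binary test/accel/dif/dif, started on three cores (-c 0x7 in the EAL parameters) with a JSON accel config passed on /dev/fd/62; the dif.c ERROR lines above come from the deliberately mismatched negative cases (GUARD, APPTAG and REFTAG checks), and the run summary confirms all 20 verify/generate-copy tests passed. Only the invocation shape is sketched below; the JSON fed on fd 62 is produced by the harness's build_accel_config helper, and config.json here is just a placeholder for it:

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    ./test/accel/dif/dif -c /dev/fd/62 62< config.json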
00:06:22.112 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:22.112 05:31:12 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.112 05:31:12 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3265160 00:06:22.112 05:31:12 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:22.112 05:31:12 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3265160 00:06:22.112 05:31:12 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 3265160 ']' 00:06:22.112 05:31:12 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.112 05:31:12 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:22.112 05:31:12 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.112 05:31:12 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:22.112 05:31:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.112 [2024-05-15 05:31:12.086202] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:22.112 [2024-05-15 05:31:12.086281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265160 ] 00:06:22.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.370 [2024-05-15 05:31:12.157495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.370 [2024-05-15 05:31:12.235926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.939 05:31:12 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:22.939 05:31:12 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:22.939 05:31:12 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:22.939 05:31:12 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:22.939 05:31:12 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:22.939 05:31:12 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:22.939 05:31:12 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:22.939 05:31:12 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:22.939 05:31:12 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:22.939 05:31:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.939 ************************************ 00:06:22.939 START TEST accel_assign_opcode 00:06:22.939 ************************************ 00:06:22.939 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:22.939 05:31:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:22.939 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.939 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:22.939 [2024-05-15 05:31:12.958062] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.198 [2024-05-15 05:31:12.966069] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.198 05:31:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.198 software 00:06:23.198 00:06:23.198 real 0m0.243s 00:06:23.198 user 0m0.046s 00:06:23.198 sys 0m0.015s 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:23.198 05:31:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:23.198 ************************************ 00:06:23.198 END TEST accel_assign_opcode 00:06:23.198 ************************************ 00:06:23.457 05:31:13 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3265160 00:06:23.457 05:31:13 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 3265160 ']' 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 3265160 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3265160 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3265160' 00:06:23.458 killing process with pid 3265160 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@966 -- # kill 3265160 00:06:23.458 05:31:13 accel_rpc -- common/autotest_common.sh@971 -- # wait 3265160 00:06:23.717 00:06:23.717 real 0m1.623s 00:06:23.717 user 0m1.659s 00:06:23.717 sys 0m0.482s 00:06:23.717 05:31:13 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:23.717 05:31:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.717 ************************************ 00:06:23.717 END TEST accel_rpc 00:06:23.717 ************************************ 00:06:23.717 05:31:13 -- 
spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:23.717 05:31:13 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:23.717 05:31:13 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:23.717 05:31:13 -- common/autotest_common.sh@10 -- # set +x 00:06:23.717 ************************************ 00:06:23.717 START TEST app_cmdline 00:06:23.717 ************************************ 00:06:23.717 05:31:13 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:23.976 * Looking for test storage... 00:06:23.976 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:23.976 05:31:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:23.976 05:31:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3265501 00:06:23.976 05:31:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3265501 00:06:23.976 05:31:13 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:23.976 05:31:13 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 3265501 ']' 00:06:23.976 05:31:13 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.976 05:31:13 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:23.976 05:31:13 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.976 05:31:13 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:23.976 05:31:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 [2024-05-15 05:31:13.773592] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
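The accel_rpc trace above ends with the opcode-assignment check: the copy opcode is pinned to the software module over JSON-RPC before framework_start_init, the assignment is read back and grepped, and the target (pid 3265160) is then killed. A minimal sketch of that sequence against an already-running target, with the rpc.py path assumed relative to an SPDK checkout:

    # Sketch of the accel_assign_opcode flow traced above; the rpc.py path is an assumption.
    rpc=./scripts/rpc.py

    $rpc accel_assign_opc -o copy -m software     # ask for 'copy' to run on the software module
    $rpc framework_start_init                     # finish subsystem init so the assignment applies
    # Read the opcode map back and confirm 'copy' now resolves to "software".
    $rpc accel_get_opc_assignments | jq -r .copy | grep -q software && echo 'copy -> software'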
00:06:23.976 [2024-05-15 05:31:13.773650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265501 ] 00:06:23.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.976 [2024-05-15 05:31:13.840768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.976 [2024-05-15 05:31:13.919297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:24.913 { 00:06:24.913 "version": "SPDK v24.05-pre git sha1 4506c0c36", 00:06:24.913 "fields": { 00:06:24.913 "major": 24, 00:06:24.913 "minor": 5, 00:06:24.913 "patch": 0, 00:06:24.913 "suffix": "-pre", 00:06:24.913 "commit": "4506c0c36" 00:06:24.913 } 00:06:24.913 } 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.913 05:31:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:24.913 
05:31:14 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:24.913 05:31:14 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.172 request: 00:06:25.172 { 00:06:25.173 "method": "env_dpdk_get_mem_stats", 00:06:25.173 "req_id": 1 00:06:25.173 } 00:06:25.173 Got JSON-RPC error response 00:06:25.173 response: 00:06:25.173 { 00:06:25.173 "code": -32601, 00:06:25.173 "message": "Method not found" 00:06:25.173 } 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:25.173 05:31:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3265501 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 3265501 ']' 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 3265501 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:25.173 05:31:14 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3265501 00:06:25.173 05:31:15 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:25.173 05:31:15 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:25.173 05:31:15 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3265501' 00:06:25.173 killing process with pid 3265501 00:06:25.173 05:31:15 app_cmdline -- common/autotest_common.sh@966 -- # kill 3265501 00:06:25.173 05:31:15 app_cmdline -- common/autotest_common.sh@971 -- # wait 3265501 00:06:25.432 00:06:25.432 real 0m1.663s 00:06:25.432 user 0m1.955s 00:06:25.432 sys 0m0.471s 00:06:25.432 05:31:15 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:25.432 05:31:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.432 ************************************ 00:06:25.432 END TEST app_cmdline 00:06:25.432 ************************************ 00:06:25.432 05:31:15 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:25.432 05:31:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:25.432 05:31:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:25.432 05:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.432 ************************************ 00:06:25.432 START TEST version 00:06:25.432 ************************************ 00:06:25.432 05:31:15 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:25.692 * Looking for test storage... 
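The app_cmdline run that just completed exercises the --rpcs-allowed whitelist: spdk_tgt is started with only spdk_get_version and rpc_get_methods permitted, both allowed methods are queried and compared against the expected pair, and the disallowed env_dpdk_get_mem_stats call must come back as JSON-RPC error -32601 ("Method not found"). A condensed sketch of that check (binary and script paths are assumptions; the traced script also waits on the RPC socket and traps EXIT to kill the target):

    # Condensed sketch of the --rpcs-allowed check traced above; paths are assumptions.
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    # (the traced run polls the RPC socket via waitforlisten before issuing the first call)

    ./scripts/rpc.py spdk_get_version                        # allowed: returns the version JSON
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # allowed: exactly the two whitelisted methods
    if ./scripts/rpc.py env_dpdk_get_mem_stats; then         # not whitelisted: expect -32601 Method not found
        echo 'whitelist not enforced' >&2; kill "$tgt_pid"; exit 1
    fi

    kill "$tgt_pid"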
00:06:25.692 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:25.692 05:31:15 version -- app/version.sh@17 -- # get_header_version major 00:06:25.692 05:31:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.692 05:31:15 version -- app/version.sh@17 -- # major=24 00:06:25.692 05:31:15 version -- app/version.sh@18 -- # get_header_version minor 00:06:25.692 05:31:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.692 05:31:15 version -- app/version.sh@18 -- # minor=5 00:06:25.692 05:31:15 version -- app/version.sh@19 -- # get_header_version patch 00:06:25.692 05:31:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.692 05:31:15 version -- app/version.sh@19 -- # patch=0 00:06:25.692 05:31:15 version -- app/version.sh@20 -- # get_header_version suffix 00:06:25.692 05:31:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # cut -f2 00:06:25.692 05:31:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:25.692 05:31:15 version -- app/version.sh@20 -- # suffix=-pre 00:06:25.692 05:31:15 version -- app/version.sh@22 -- # version=24.5 00:06:25.692 05:31:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:25.692 05:31:15 version -- app/version.sh@28 -- # version=24.5rc0 00:06:25.692 05:31:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:25.692 05:31:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:25.692 05:31:15 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:25.692 05:31:15 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:25.692 00:06:25.692 real 0m0.187s 00:06:25.692 user 0m0.102s 00:06:25.692 sys 0m0.134s 00:06:25.692 05:31:15 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:25.692 05:31:15 version -- common/autotest_common.sh@10 -- # set +x 00:06:25.692 ************************************ 00:06:25.692 END TEST version 00:06:25.692 ************************************ 00:06:25.692 05:31:15 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@194 -- # uname -s 00:06:25.692 05:31:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:25.692 05:31:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.692 05:31:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:25.692 05:31:15 -- spdk/autotest.sh@207 -- 
# '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:25.692 05:31:15 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:25.692 05:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.692 05:31:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:25.692 05:31:15 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:06:25.692 05:31:15 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:25.692 05:31:15 -- spdk/autotest.sh@367 -- # [[ 1 -eq 1 ]] 00:06:25.692 05:31:15 -- spdk/autotest.sh@368 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:25.693 05:31:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:25.693 05:31:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:25.693 05:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.952 ************************************ 00:06:25.952 START TEST llvm_fuzz 00:06:25.952 ************************************ 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:25.952 * Looking for test storage... 
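Before the llvm_fuzz stage gets under way, the version suite above derives the release string by grepping the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h, cutting the value and stripping quotes, mapping the -pre suffix to rc0, and comparing the assembled 24.5rc0 against what the Python bindings report via spdk.__version__. A rough equivalent of that extraction (the header path is assumed, and awk stands in for the cut -f2 used in the trace):

    # Rough equivalent of get_header_version as traced above; the header path is an assumption.
    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | awk '{print $3}' | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | awk '{print $3}' | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | awk '{print $3}' | tr -d '"')
    version=$major.$minor
    if (( patch != 0 )); then version=$version.$patch; fi
    echo "$version"                                        # 24.5 for the tree under test
    python3 -c 'import spdk; print(spdk.__version__)'      # 24.5rc0 -- the trace compares these two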
00:06:25.952 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@547 -- # fuzzers=() 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@547 -- # local fuzzers 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@549 -- # [[ -n '' ]] 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@556 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:25.952 05:31:15 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:25.952 05:31:15 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:25.952 ************************************ 00:06:25.952 START TEST nvmf_fuzz 00:06:25.952 ************************************ 00:06:25.952 05:31:15 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:26.214 * Looking for test storage... 
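llvm.sh assembles its fuzzer list by globbing $rootdir/test/fuzz/llvm/* and keeping only the basenames (common.sh llvm-gcov.sh nvmf vfio in this run), prepares the shared corpus and coverage output directories, and then loops over the entries, dispatching only those with a dedicated runner; that is why nvmf_fuzz via nvmf/run.sh starts immediately afterwards. A schematic version of that dispatch loop (directory names are taken from the trace; the rest is an assumption):

    # Schematic version of the fuzzer dispatch traced above; rootdir and output paths are assumptions.
    rootdir=/path/to/spdk
    fuzzers=("$rootdir"/test/fuzz/llvm/*)        # globs to common.sh llvm-gcov.sh nvmf vfio here
    fuzzers=("${fuzzers[@]##*/}")                # keep basenames only

    mkdir -p "$rootdir/../corpus" "$rootdir/../output/llvm/coverage"

    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf | vfio) "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;   # real fuzz targets
            *) ;;                                                      # helper scripts: nothing to run
        esac
    done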
00:06:26.214 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:26.214 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:26.215 05:31:16 
llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:26.215 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:26.215 #define SPDK_CONFIG_H 00:06:26.215 #define SPDK_CONFIG_APPS 1 00:06:26.215 #define SPDK_CONFIG_ARCH native 00:06:26.215 #undef SPDK_CONFIG_ASAN 00:06:26.215 #undef SPDK_CONFIG_AVAHI 00:06:26.215 #undef SPDK_CONFIG_CET 00:06:26.215 #define SPDK_CONFIG_COVERAGE 1 00:06:26.215 #define SPDK_CONFIG_CROSS_PREFIX 00:06:26.215 #undef SPDK_CONFIG_CRYPTO 00:06:26.215 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:26.215 #undef SPDK_CONFIG_CUSTOMOCF 00:06:26.215 #undef SPDK_CONFIG_DAOS 00:06:26.215 #define SPDK_CONFIG_DAOS_DIR 00:06:26.215 #define SPDK_CONFIG_DEBUG 1 00:06:26.215 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:26.215 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:26.215 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:26.215 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:26.215 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:26.215 #undef SPDK_CONFIG_DPDK_UADK 00:06:26.215 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:26.215 #define SPDK_CONFIG_EXAMPLES 1 00:06:26.215 #undef SPDK_CONFIG_FC 00:06:26.215 #define SPDK_CONFIG_FC_PATH 00:06:26.215 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:26.215 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:26.215 #undef SPDK_CONFIG_FUSE 00:06:26.215 #define SPDK_CONFIG_FUZZER 1 00:06:26.215 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:26.215 #undef SPDK_CONFIG_GOLANG 00:06:26.215 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:26.215 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:26.215 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:26.215 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:26.215 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:26.215 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:26.215 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:26.215 #define SPDK_CONFIG_IDXD 1 00:06:26.215 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:26.215 #undef SPDK_CONFIG_IPSEC_MB 00:06:26.215 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:26.215 #define SPDK_CONFIG_ISAL 1 00:06:26.215 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:26.215 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:26.215 #define SPDK_CONFIG_LIBDIR 00:06:26.215 #undef SPDK_CONFIG_LTO 00:06:26.215 #define SPDK_CONFIG_MAX_LCORES 00:06:26.215 #define SPDK_CONFIG_NVME_CUSE 1 00:06:26.215 #undef SPDK_CONFIG_OCF 00:06:26.215 #define SPDK_CONFIG_OCF_PATH 00:06:26.215 #define SPDK_CONFIG_OPENSSL_PATH 00:06:26.215 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:26.215 #define SPDK_CONFIG_PGO_DIR 00:06:26.215 #undef SPDK_CONFIG_PGO_USE 00:06:26.215 #define SPDK_CONFIG_PREFIX /usr/local 00:06:26.215 #undef SPDK_CONFIG_RAID5F 00:06:26.215 #undef 
SPDK_CONFIG_RBD 00:06:26.215 #define SPDK_CONFIG_RDMA 1 00:06:26.216 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:26.216 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:26.216 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:26.216 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:26.216 #undef SPDK_CONFIG_SHARED 00:06:26.216 #undef SPDK_CONFIG_SMA 00:06:26.216 #define SPDK_CONFIG_TESTS 1 00:06:26.216 #undef SPDK_CONFIG_TSAN 00:06:26.216 #define SPDK_CONFIG_UBLK 1 00:06:26.216 #define SPDK_CONFIG_UBSAN 1 00:06:26.216 #undef SPDK_CONFIG_UNIT_TESTS 00:06:26.216 #undef SPDK_CONFIG_URING 00:06:26.216 #define SPDK_CONFIG_URING_PATH 00:06:26.216 #undef SPDK_CONFIG_URING_ZNS 00:06:26.216 #undef SPDK_CONFIG_USDT 00:06:26.216 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:26.216 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:26.216 #define SPDK_CONFIG_VFIO_USER 1 00:06:26.216 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:26.216 #define SPDK_CONFIG_VHOST 1 00:06:26.216 #define SPDK_CONFIG_VIRTIO 1 00:06:26.216 #undef SPDK_CONFIG_VTUNE 00:06:26.216 #define SPDK_CONFIG_VTUNE_DIR 00:06:26.216 #define SPDK_CONFIG_WERROR 1 00:06:26.216 #define SPDK_CONFIG_WPDK_DIR 00:06:26.216 #undef SPDK_CONFIG_XNVME 00:06:26.216 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:06:26.216 05:31:16 
llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # : 1 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # : 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:26.216 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:26.216 
05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # : 1 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # : 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # : 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # : true 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # : 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # : 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
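The long run of paired ': <value>' / 'export ...' lines above is autotest_common.sh giving every test flag a default and exporting it: flags already set for this job (SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT, SPDK_RUN_UBSAN and RUN_NIGHTLY all trace as ': 1') keep their values, while everything left unset falls back to 0 or an empty/string default. The xtrace is consistent with the usual bash default-expansion idiom; a minimal sketch of it (flag names come from the trace, the literal source lines are an assumption):

    # Sketch of the default-then-export pattern implied by the ': N' trace lines; not the literal source.
    : "${RUN_NIGHTLY:=1}";               export RUN_NIGHTLY               # already 1 for this job -> traces as ': 1'
    : "${SPDK_TEST_FUZZER:=0}";          export SPDK_TEST_FUZZER          # overridden to 1 for this run
    : "${SPDK_TEST_NVME:=0}";            export SPDK_TEST_NVME            # untouched, keeps the 0 default
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT # non-boolean default, traces as ': rdma'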
00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@200 -- # cat 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:26.217 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3266077 ]] 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # kill -0 3266077 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:26.218 
05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pK6rVx 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.pK6rVx/tests/nvmf /tmp/spdk.pK6rVx 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # df -T 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=968232960 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4316196864 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=52498980864 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=9243324416 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866440192 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342489088 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5971968 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.218 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869630976 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1523712 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:26.219 * Looking for test storage... 
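The probe above, together with the selection loop traced next, reduces to a small piece of shell logic: parse df -T once, then walk the candidate directories (the test dir plus a mktemp fallback under /tmp/spdk.pK6rVx in this run) until one sits on a filesystem with at least the requested space. A minimal sketch of that logic, with simplified names and plain 1K-block df output assumed (the real helper also special-cases tmpfs/ramfs mounts and grows the requested size before exporting SPDK_TEST_STORAGE):

    # Sketch of the test-storage selection; simplified, not the autotest_common.sh source.
    testdir=${testdir:-$PWD}                  # stands in for the fuzz test dir
    requested_size=2147483648                 # 2 GiB, as requested by this run
    storage_candidates=("$testdir" "/tmp")    # the real run also tries a mktemp fallback

    declare -A avails
    while read -r source fs blocks used avail usep mount; do
        avails["$mount"]=$((avail * 1024))    # plain df -T reports 1K blocks
    done < <(df -T | tail -n +2)

    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        if (( ${avails[$mount]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done

In this run the root overlay mount wins with roughly 52 GB available, so SPDK_TEST_STORAGE lands under the spdk test tree, as the trace below confirms.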
00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # target_space=52498980864 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # new_size=11457916928 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.219 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@389 -- # return 0 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # set -o errtrace 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1684 -- # true 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1686 -- # xtrace_fd 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.219 05:31:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:26.219 [2024-05-15 05:31:16.231105] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:26.219 [2024-05-15 05:31:16.231193] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266224 ] 00:06:26.479 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.479 [2024-05-15 05:31:16.406088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.479 [2024-05-15 05:31:16.471106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.738 [2024-05-15 05:31:16.530481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.738 [2024-05-15 05:31:16.546437] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:26.738 [2024-05-15 05:31:16.546856] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:26.738 INFO: Running with entropic power schedule (0xFF, 100). 00:06:26.738 INFO: Seed: 541247468 00:06:26.738 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:26.738 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:26.738 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:26.738 INFO: A corpus is not provided, starting from an empty corpus 00:06:26.738 #2 INITED exec/s: 0 rss: 63Mb 00:06:26.738 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:26.738 This may also happen if the target rejected all inputs we tried so far 00:06:26.738 [2024-05-15 05:31:16.591969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:26.738 [2024-05-15 05:31:16.592002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.998 NEW_FUNC[1/686]: 0x481d20 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:26.998 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:26.998 #18 NEW cov: 11787 ft: 11788 corp: 2/73b lim: 320 exec/s: 0 rss: 70Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:06:26.998 [2024-05-15 05:31:16.923031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.998 [2024-05-15 05:31:16.923090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.998 #22 NEW cov: 11937 ft: 12503 corp: 3/190b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 4 InsertByte-CopyPart-EraseBytes-InsertRepeatedBytes- 00:06:26.998 [2024-05-15 05:31:16.962833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.998 [2024-05-15 05:31:16.962858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.998 #25 NEW cov: 11944 ft: 12820 corp: 4/267b lim: 320 exec/s: 0 rss: 70Mb L: 77/117 MS: 3 CMP-InsertByte-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\000\000"- 00:06:26.998 [2024-05-15 05:31:17.002922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:26.998 [2024-05-15 05:31:17.002948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.258 #26 NEW cov: 12029 ft: 13104 corp: 5/384b lim: 320 exec/s: 0 rss: 70Mb L: 117/117 MS: 1 ChangeBinInt- 00:06:27.258 [2024-05-15 05:31:17.053089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ff cdw10:00000000 cdw11:00000000 00:06:27.258 [2024-05-15 05:31:17.053114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.258 #29 NEW cov: 12029 ft: 13167 corp: 6/458b lim: 320 exec/s: 0 rss: 70Mb L: 74/117 MS: 3 CMP-ChangeBinInt-CrossOver- DE: "\000\000\000\000\000\000\000\000"- 00:06:27.258 [2024-05-15 05:31:17.093196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.258 [2024-05-15 05:31:17.093221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.258 #30 NEW cov: 12029 ft: 13280 corp: 7/530b lim: 320 exec/s: 0 rss: 70Mb L: 72/117 MS: 1 ChangeByte- 00:06:27.258 [2024-05-15 05:31:17.143316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.258 [2024-05-15 05:31:17.143341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.258 #31 NEW cov: 12029 ft: 13333 corp: 8/602b lim: 320 exec/s: 0 rss: 70Mb L: 72/117 MS: 1 CrossOver- 00:06:27.258 [2024-05-15 05:31:17.183468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.258 [2024-05-15 05:31:17.183494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.258 #32 NEW cov: 12029 ft: 13415 corp: 9/674b lim: 320 exec/s: 0 rss: 70Mb L: 72/117 MS: 1 ShuffleBytes- 00:06:27.258 [2024-05-15 05:31:17.223598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.258 [2024-05-15 05:31:17.223623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.258 #33 NEW cov: 12029 ft: 13442 corp: 10/746b lim: 320 exec/s: 0 rss: 70Mb L: 72/117 MS: 1 CopyPart- 00:06:27.258 [2024-05-15 05:31:17.273741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.258 [2024-05-15 05:31:17.273768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.517 #34 NEW cov: 12029 ft: 13478 corp: 11/863b lim: 320 exec/s: 0 rss: 71Mb L: 117/117 MS: 1 ShuffleBytes- 00:06:27.517 [2024-05-15 05:31:17.313832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.517 [2024-05-15 05:31:17.313858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.517 #45 NEW cov: 12029 ft: 13494 corp: 12/935b lim: 320 exec/s: 0 rss: 71Mb L: 72/117 MS: 1 ChangeByte- 00:06:27.517 [2024-05-15 05:31:17.353972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.517 [2024-05-15 05:31:17.353998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.517 #46 NEW cov: 12029 ft: 13520 corp: 13/1015b lim: 320 exec/s: 0 rss: 71Mb L: 80/117 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:27.517 [2024-05-15 05:31:17.404126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.517 [2024-05-15 05:31:17.404152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.517 #47 NEW cov: 12029 ft: 13529 corp: 14/1095b lim: 320 exec/s: 0 rss: 71Mb L: 80/117 MS: 1 ShuffleBytes- 00:06:27.517 [2024-05-15 05:31:17.454285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:800000 cdw10:00000000 cdw11:00000000 00:06:27.517 [2024-05-15 
05:31:17.454311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.517 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:27.517 #48 NEW cov: 12052 ft: 13548 corp: 15/1212b lim: 320 exec/s: 0 rss: 71Mb L: 117/117 MS: 1 ChangeBit- 00:06:27.517 [2024-05-15 05:31:17.504392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.517 [2024-05-15 05:31:17.504418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.517 #49 NEW cov: 12052 ft: 13632 corp: 16/1329b lim: 320 exec/s: 0 rss: 71Mb L: 117/117 MS: 1 CopyPart- 00:06:27.776 [2024-05-15 05:31:17.554525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00ffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.776 [2024-05-15 05:31:17.554550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.776 #50 NEW cov: 12052 ft: 13640 corp: 17/1409b lim: 320 exec/s: 50 rss: 71Mb L: 80/117 MS: 1 CopyPart- 00:06:27.776 [2024-05-15 05:31:17.604681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.776 [2024-05-15 05:31:17.604706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.776 #51 NEW cov: 12052 ft: 13718 corp: 18/1489b lim: 320 exec/s: 51 rss: 71Mb L: 80/117 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:27.776 [2024-05-15 05:31:17.654819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.776 [2024-05-15 05:31:17.654850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.776 #52 NEW cov: 12052 ft: 13737 corp: 19/1606b lim: 320 exec/s: 52 rss: 71Mb L: 117/117 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:27.776 [2024-05-15 05:31:17.694972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:c2c2c2c2 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc2c2c2c2c2c2c2c2 00:06:27.776 [2024-05-15 05:31:17.694997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.776 [2024-05-15 05:31:17.695056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.776 [2024-05-15 05:31:17.695070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.776 #53 NEW cov: 12052 ft: 13931 corp: 20/1737b lim: 320 exec/s: 53 rss: 71Mb L: 131/131 MS: 1 InsertRepeatedBytes- 00:06:27.776 [2024-05-15 05:31:17.735002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.776 [2024-05-15 05:31:17.735027] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.776 #54 NEW cov: 12052 ft: 13946 corp: 21/1822b lim: 320 exec/s: 54 rss: 72Mb L: 85/131 MS: 1 InsertRepeatedBytes- 00:06:27.776 [2024-05-15 05:31:17.775167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:27.776 [2024-05-15 05:31:17.775194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.776 #55 NEW cov: 12052 ft: 13998 corp: 22/1898b lim: 320 exec/s: 55 rss: 72Mb L: 76/131 MS: 1 CMP- DE: "\013\001\000\000"- 00:06:28.035 [2024-05-15 05:31:17.815271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:51000000 cdw10:00000000 cdw11:00000000 00:06:28.035 [2024-05-15 05:31:17.815296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.035 #56 NEW cov: 12052 ft: 14004 corp: 23/2016b lim: 320 exec/s: 56 rss: 72Mb L: 118/131 MS: 1 InsertByte- 00:06:28.035 [2024-05-15 05:31:17.865394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:76767676 cdw11:76767676 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7676767676767676 00:06:28.035 [2024-05-15 05:31:17.865418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.035 #58 NEW cov: 12052 ft: 14006 corp: 24/2129b lim: 320 exec/s: 58 rss: 72Mb L: 113/131 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:28.035 [2024-05-15 05:31:17.895542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.035 [2024-05-15 05:31:17.895567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.035 #59 NEW cov: 12052 ft: 14018 corp: 25/2231b lim: 320 exec/s: 59 rss: 72Mb L: 102/131 MS: 1 CopyPart- 00:06:28.035 [2024-05-15 05:31:17.935637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.035 [2024-05-15 05:31:17.935662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.035 #60 NEW cov: 12052 ft: 14081 corp: 26/2356b lim: 320 exec/s: 60 rss: 72Mb L: 125/131 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:28.035 [2024-05-15 05:31:17.975701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:00000000 00:06:28.035 [2024-05-15 05:31:17.975729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.035 #61 NEW cov: 12052 ft: 14101 corp: 27/2473b lim: 320 exec/s: 61 rss: 72Mb L: 117/131 MS: 1 ChangeBinInt- 00:06:28.035 [2024-05-15 05:31:18.025974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:28.035 [2024-05-15 05:31:18.025999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.036 [2024-05-15 05:31:18.026052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f0) qid:0 cid:5 nsid:f0f0f0f0 cdw10:f0f0f0f0 cdw11:f0f0f0f0 00:06:28.036 [2024-05-15 05:31:18.026065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.036 #67 NEW cov: 12052 ft: 14116 corp: 28/2631b lim: 320 exec/s: 67 rss: 72Mb L: 158/158 MS: 1 InsertRepeatedBytes- 00:06:28.295 [2024-05-15 05:31:18.065981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.295 [2024-05-15 05:31:18.066006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.295 #68 NEW cov: 12052 ft: 14129 corp: 29/2748b lim: 320 exec/s: 68 rss: 72Mb L: 117/158 MS: 1 ChangeBit- 00:06:28.295 [2024-05-15 05:31:18.106096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ff cdw10:00000000 cdw11:00010000 00:06:28.295 [2024-05-15 05:31:18.106121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.295 #69 NEW cov: 12052 ft: 14151 corp: 30/2815b lim: 320 exec/s: 69 rss: 72Mb L: 67/158 MS: 1 EraseBytes- 00:06:28.295 [2024-05-15 05:31:18.146176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff27ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:28.295 [2024-05-15 05:31:18.146201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.295 #70 NEW cov: 12052 ft: 14174 corp: 31/2882b lim: 320 exec/s: 70 rss: 72Mb L: 67/158 MS: 1 EraseBytes- 00:06:28.295 [2024-05-15 05:31:18.186299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.295 [2024-05-15 05:31:18.186324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.295 #71 NEW cov: 12052 ft: 14184 corp: 32/3000b lim: 320 exec/s: 71 rss: 72Mb L: 118/158 MS: 1 InsertByte- 00:06:28.295 [2024-05-15 05:31:18.226450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.295 [2024-05-15 05:31:18.226475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.295 #72 NEW cov: 12052 ft: 14223 corp: 33/3078b lim: 320 exec/s: 72 rss: 72Mb L: 78/158 MS: 1 InsertByte- 00:06:28.295 [2024-05-15 05:31:18.266803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (2c) qid:0 cid:4 nsid:0 cdw10:00ffffff cdw11:00000000 00:06:28.295 [2024-05-15 05:31:18.266827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.295 [2024-05-15 05:31:18.266887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.295 [2024-05-15 05:31:18.266902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.295 [2024-05-15 
05:31:18.266953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.295 [2024-05-15 05:31:18.266969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.295 #73 NEW cov: 12052 ft: 14424 corp: 34/3283b lim: 320 exec/s: 73 rss: 72Mb L: 205/205 MS: 1 CopyPart- 00:06:28.554 [2024-05-15 05:31:18.316657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:4c4c4c4c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x4c4c4c4c4c4c4c4c 00:06:28.554 [2024-05-15 05:31:18.316683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.554 #74 NEW cov: 12052 ft: 14441 corp: 35/3391b lim: 320 exec/s: 74 rss: 72Mb L: 108/205 MS: 1 InsertRepeatedBytes- 00:06:28.554 [2024-05-15 05:31:18.366813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff000001 00:06:28.554 [2024-05-15 05:31:18.366840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.554 #75 NEW cov: 12052 ft: 14454 corp: 36/3467b lim: 320 exec/s: 75 rss: 73Mb L: 76/205 MS: 1 CopyPart- 00:06:28.554 [2024-05-15 05:31:18.416999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ff cdw10:c4000000 cdw11:c4c4c4c4 00:06:28.554 [2024-05-15 05:31:18.417025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.554 #76 NEW cov: 12052 ft: 14473 corp: 37/3552b lim: 320 exec/s: 76 rss: 73Mb L: 85/205 MS: 1 InsertRepeatedBytes- 00:06:28.554 [2024-05-15 05:31:18.457079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:c2c2c2c2 cdw10:c2c2c2c2 cdw11:c2c2c2c2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc2c2c2c2c2c2c2c2 00:06:28.554 [2024-05-15 05:31:18.457104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.554 #77 NEW cov: 12052 ft: 14477 corp: 38/3638b lim: 320 exec/s: 77 rss: 73Mb L: 86/205 MS: 1 EraseBytes- 00:06:28.554 [2024-05-15 05:31:18.507271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ff cdw10:00000000 cdw11:00010000 00:06:28.554 [2024-05-15 05:31:18.507296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.554 #78 NEW cov: 12052 ft: 14483 corp: 39/3705b lim: 320 exec/s: 78 rss: 73Mb L: 67/205 MS: 1 ShuffleBytes- 00:06:28.554 [2024-05-15 05:31:18.557386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.554 [2024-05-15 05:31:18.557411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.814 #79 NEW cov: 12052 ft: 14488 corp: 40/3785b lim: 320 exec/s: 79 rss: 73Mb L: 80/205 MS: 1 CrossOver- 00:06:28.814 [2024-05-15 05:31:18.597526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.814 
[2024-05-15 05:31:18.597551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.814 #85 NEW cov: 12052 ft: 14490 corp: 41/3863b lim: 320 exec/s: 42 rss: 73Mb L: 78/205 MS: 1 ChangeBit- 00:06:28.814 #85 DONE cov: 12052 ft: 14490 corp: 41/3863b lim: 320 exec/s: 42 rss: 73Mb 00:06:28.814 ###### Recommended dictionary. ###### 00:06:28.814 "\000\000\000\000\000\000\000\000" # Uses: 7 00:06:28.814 "\013\001\000\000" # Uses: 0 00:06:28.814 ###### End of recommended dictionary. ###### 00:06:28.814 Done 85 runs in 2 second(s) 00:06:28.814 [2024-05-15 05:31:18.626891] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:28.814 05:31:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:28.814 [2024-05-15 05:31:18.796151] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
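Every short-fuzz iteration is launched the same way by nvmf/run.sh, as the trace above shows for fuzzer 1: derive port 44NN from the fuzzer index, rewrite the shared JSON config to that port, point LSAN at a suppression file covering two known-benign allocations, and run llvm_nvme_fuzz against the per-fuzzer corpus directory. A simplified sketch of that launch pattern (SPDK_DIR stands in for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk, and the redirections, which xtrace does not print, are assumed):

    # Sketch of the per-fuzzer launch; simplified, not the real nvmf/run.sh.
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local port corpus_dir nvmf_cfg suppress_file
        port=44$(printf %02d "$fuzzer_type")             # 4400, 4401, ...
        corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type
        nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        suppress_file=/var/tmp/suppress_nvmf_fuzz

        mkdir -p "$corpus_dir"
        # Retarget the shared JSON config at this fuzzer's TCP port.
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
        # Allocations that legitimately outlive the run are suppressed for LSAN.
        { echo leak:spdk_nvmf_qpair_disconnect
          echo leak:nvmf_ctrlr_create; } > "$suppress_file"

        LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
            -P "$SPDK_DIR/../output/llvm/" \
            -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
            -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
    }

The call traced above, start_llvm_fuzz 1 1 0x1, maps to fuzzer_type=1 (port 4401), a one-second time budget, and core mask 0x1.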
00:06:28.814 [2024-05-15 05:31:18.796224] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266515 ] 00:06:28.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.074 [2024-05-15 05:31:18.982622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.074 [2024-05-15 05:31:19.048845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.334 [2024-05-15 05:31:19.107937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.334 [2024-05-15 05:31:19.123885] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:29.334 [2024-05-15 05:31:19.124303] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:29.334 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.334 INFO: Seed: 3118252825 00:06:29.334 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:29.334 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:29.334 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:29.334 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.334 #2 INITED exec/s: 0 rss: 64Mb 00:06:29.334 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:29.334 This may also happen if the target rejected all inputs we tried so far 00:06:29.334 [2024-05-15 05:31:19.179315] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:29.334 [2024-05-15 05:31:19.179630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.334 [2024-05-15 05:31:19.179661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.334 [2024-05-15 05:31:19.179722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.334 [2024-05-15 05:31:19.179736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.594 NEW_FUNC[1/685]: 0x482620 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:29.594 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.594 #3 NEW cov: 11884 ft: 11883 corp: 2/15b lim: 30 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:06:29.594 [2024-05-15 05:31:19.490182] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:29.594 [2024-05-15 05:31:19.490304] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:29.594 [2024-05-15 05:31:19.490432] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:29.594 [2024-05-15 05:31:19.490532] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 
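The "Get log page: len (10244) > buf size (4096)" rejections above follow directly from the NVMe GET LOG PAGE encoding, in which NUMD is a 0-based dword count split across CDW10 bits 31:16 (NUMDL) and CDW11 bits 15:0 (NUMDU). Worked out for the rejected command cdw10:0a000000 cdw11:00000000:

    NUMDL = 0x0a00 = 2560, NUMDU = 0
    requested length = (NUMD + 1) * 4 = 2561 * 4 = 10244 bytes

which exceeds the 4096-byte response buffer, so nvmf_ctrlr_get_log_page fails the command and the fuzzer sees the INVALID FIELD (00/02) completions logged here.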
00:06:29.594 [2024-05-15 05:31:19.490747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.490787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.594 [2024-05-15 05:31:19.490851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.490870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.594 [2024-05-15 05:31:19.490930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.490948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.594 [2024-05-15 05:31:19.491009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.491028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.594 NEW_FUNC[1/1]: 0x17482d0 in nvme_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1141 00:06:29.594 #6 NEW cov: 12023 ft: 13006 corp: 3/39b lim: 30 exec/s: 0 rss: 70Mb L: 24/24 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:06:29.594 [2024-05-15 05:31:19.530074] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:29.594 [2024-05-15 05:31:19.530287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.530314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.594 #7 NEW cov: 12029 ft: 13591 corp: 4/47b lim: 30 exec/s: 0 rss: 70Mb L: 8/24 MS: 1 EraseBytes- 00:06:29.594 [2024-05-15 05:31:19.580261] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.594 [2024-05-15 05:31:19.580369] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.594 [2024-05-15 05:31:19.580473] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.594 [2024-05-15 05:31:19.580667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.580693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.594 [2024-05-15 05:31:19.580751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.580766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.594 [2024-05-15 05:31:19.580820] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.594 [2024-05-15 05:31:19.580833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.594 #12 NEW cov: 12114 ft: 14125 corp: 5/69b lim: 30 exec/s: 0 rss: 71Mb L: 22/24 MS: 5 CrossOver-ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:29.854 [2024-05-15 05:31:19.620391] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:29.854 [2024-05-15 05:31:19.620604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.854 [2024-05-15 05:31:19.620629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.854 #13 NEW cov: 12114 ft: 14206 corp: 6/75b lim: 30 exec/s: 0 rss: 71Mb L: 6/24 MS: 1 EraseBytes- 00:06:29.854 [2024-05-15 05:31:19.670459] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.854 [2024-05-15 05:31:19.670571] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0a 00:06:29.854 [2024-05-15 05:31:19.670763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.854 [2024-05-15 05:31:19.670788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.854 [2024-05-15 05:31:19.670845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.854 [2024-05-15 05:31:19.670860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.854 #14 NEW cov: 12114 ft: 14322 corp: 7/88b lim: 30 exec/s: 0 rss: 71Mb L: 13/24 MS: 1 EraseBytes- 00:06:29.854 [2024-05-15 05:31:19.720722] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.854 [2024-05-15 05:31:19.720834] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.854 [2024-05-15 05:31:19.720943] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14396) > buf size (4096) 00:06:29.854 [2024-05-15 05:31:19.721178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.854 [2024-05-15 05:31:19.721204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.855 [2024-05-15 05:31:19.721259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.721274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.855 [2024-05-15 05:31:19.721327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0e0e000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 
[2024-05-15 05:31:19.721341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.855 #15 NEW cov: 12114 ft: 14494 corp: 8/110b lim: 30 exec/s: 0 rss: 71Mb L: 22/24 MS: 1 CrossOver- 00:06:29.855 [2024-05-15 05:31:19.760775] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.855 [2024-05-15 05:31:19.760885] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14380) > buf size (4096) 00:06:29.855 [2024-05-15 05:31:19.760993] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.855 [2024-05-15 05:31:19.761187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.761213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.855 [2024-05-15 05:31:19.761269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.761283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.855 [2024-05-15 05:31:19.761336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:000e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.761349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.855 #16 NEW cov: 12114 ft: 14681 corp: 9/131b lim: 30 exec/s: 0 rss: 71Mb L: 21/24 MS: 1 CrossOver- 00:06:29.855 [2024-05-15 05:31:19.800858] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.855 [2024-05-15 05:31:19.800972] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e87 00:06:29.855 [2024-05-15 05:31:19.801185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.801210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.855 [2024-05-15 05:31:19.801266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.801281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.855 #17 NEW cov: 12114 ft: 14713 corp: 10/144b lim: 30 exec/s: 0 rss: 71Mb L: 13/24 MS: 1 ChangeByte- 00:06:29.855 [2024-05-15 05:31:19.851010] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:29.855 [2024-05-15 05:31:19.851120] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e87 00:06:29.855 [2024-05-15 05:31:19.851320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.851346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.855 [2024-05-15 05:31:19.851423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:29.855 [2024-05-15 05:31:19.851439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.115 #18 NEW cov: 12114 ft: 14749 corp: 11/157b lim: 30 exec/s: 0 rss: 71Mb L: 13/24 MS: 1 CopyPart- 00:06:30.115 [2024-05-15 05:31:19.901098] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.115 [2024-05-15 05:31:19.901299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:19.901323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 #19 NEW cov: 12114 ft: 14776 corp: 12/165b lim: 30 exec/s: 0 rss: 71Mb L: 8/24 MS: 1 EraseBytes- 00:06:30.115 [2024-05-15 05:31:19.941252] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:30.115 [2024-05-15 05:31:19.941469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:19.941499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 #20 NEW cov: 12114 ft: 14830 corp: 13/173b lim: 30 exec/s: 0 rss: 71Mb L: 8/24 MS: 1 CMP- DE: "\377\377\377\377\001,.\015"- 00:06:30.115 [2024-05-15 05:31:19.981415] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.115 [2024-05-15 05:31:19.981527] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (538684) > buf size (4096) 00:06:30.115 [2024-05-15 05:31:19.981631] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.115 [2024-05-15 05:31:19.981849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:19.981875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 [2024-05-15 05:31:19.981935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:19.981950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.115 [2024-05-15 05:31:19.982003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:000e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:19.982017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.115 #21 NEW cov: 12114 ft: 14855 corp: 14/195b lim: 30 exec/s: 0 rss: 71Mb L: 22/24 MS: 1 ChangeBinInt- 00:06:30.115 [2024-05-15 05:31:20.021527] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff01 00:06:30.115 [2024-05-15 05:31:20.021653] 
ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000e0e 00:06:30.115 [2024-05-15 05:31:20.021883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:20.021912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 [2024-05-15 05:31:20.021969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2c2e830d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:20.021984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.115 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:30.115 #27 NEW cov: 12137 ft: 14885 corp: 15/211b lim: 30 exec/s: 0 rss: 72Mb L: 16/24 MS: 1 PersAutoDict- DE: "\377\377\377\377\001,.\015"- 00:06:30.115 [2024-05-15 05:31:20.071859] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:30.115 [2024-05-15 05:31:20.072309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:20.072366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.115 #30 NEW cov: 12137 ft: 14930 corp: 16/222b lim: 30 exec/s: 0 rss: 72Mb L: 11/24 MS: 3 InsertByte-InsertByte-PersAutoDict- DE: "\377\377\377\377\001,.\015"- 00:06:30.115 [2024-05-15 05:31:20.112002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.115 [2024-05-15 05:31:20.112030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 #31 NEW cov: 12137 ft: 14989 corp: 17/230b lim: 30 exec/s: 0 rss: 72Mb L: 8/24 MS: 1 ChangeBinInt- 00:06:30.375 [2024-05-15 05:31:20.161943] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.375 [2024-05-15 05:31:20.162063] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0a 00:06:30.375 [2024-05-15 05:31:20.162267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.162293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 05:31:20.162351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.162366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.375 #32 NEW cov: 12137 ft: 15065 corp: 18/243b lim: 30 exec/s: 32 rss: 72Mb L: 13/24 MS: 1 ShuffleBytes- 00:06:30.375 [2024-05-15 05:31:20.202032] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:30.375 [2024-05-15 05:31:20.202336] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.202361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 05:31:20.202423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.202438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.375 #33 NEW cov: 12137 ft: 15077 corp: 19/259b lim: 30 exec/s: 33 rss: 72Mb L: 16/24 MS: 1 CopyPart- 00:06:30.375 [2024-05-15 05:31:20.242187] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff01 00:06:30.375 [2024-05-15 05:31:20.242303] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:30.375 [2024-05-15 05:31:20.242413] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000e0e 00:06:30.375 [2024-05-15 05:31:20.242629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.242656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 05:31:20.242714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.242728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 05:31:20.242784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff2e830d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.242798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.375 #34 NEW cov: 12137 ft: 15081 corp: 20/281b lim: 30 exec/s: 34 rss: 72Mb L: 22/24 MS: 1 InsertRepeatedBytes- 00:06:30.375 [2024-05-15 05:31:20.292296] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:30.375 [2024-05-15 05:31:20.292412] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000b0e 00:06:30.375 [2024-05-15 05:31:20.292517] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000000a 00:06:30.375 [2024-05-15 05:31:20.292713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.292738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 05:31:20.292798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.292812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 
05:31:20.292864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0e0e0200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.292877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.375 #35 NEW cov: 12137 ft: 15129 corp: 21/300b lim: 30 exec/s: 35 rss: 72Mb L: 19/24 MS: 1 CrossOver- 00:06:30.375 [2024-05-15 05:31:20.332318] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.375 [2024-05-15 05:31:20.332522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.332548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 #36 NEW cov: 12137 ft: 15191 corp: 22/311b lim: 30 exec/s: 36 rss: 72Mb L: 11/24 MS: 1 EraseBytes- 00:06:30.375 [2024-05-15 05:31:20.382514] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0e 00:06:30.375 [2024-05-15 05:31:20.382628] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000e0e 00:06:30.375 [2024-05-15 05:31:20.382825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.382851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.375 [2024-05-15 05:31:20.382902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.375 [2024-05-15 05:31:20.382917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.635 #37 NEW cov: 12137 ft: 15222 corp: 23/327b lim: 30 exec/s: 37 rss: 72Mb L: 16/24 MS: 1 CopyPart- 00:06:30.635 [2024-05-15 05:31:20.422605] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:30.635 [2024-05-15 05:31:20.422820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.635 [2024-05-15 05:31:20.422845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.635 #38 NEW cov: 12137 ft: 15240 corp: 24/335b lim: 30 exec/s: 38 rss: 72Mb L: 8/24 MS: 1 CMP- DE: "\200\000"- 00:06:30.635 [2024-05-15 05:31:20.462727] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff23 00:06:30.635 [2024-05-15 05:31:20.462841] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000e0e 00:06:30.635 [2024-05-15 05:31:20.463056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.635 [2024-05-15 05:31:20.463082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.635 [2024-05-15 05:31:20.463135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 
cdw10:0e0e830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.635 [2024-05-15 05:31:20.463149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.635 #39 NEW cov: 12137 ft: 15241 corp: 25/351b lim: 30 exec/s: 39 rss: 72Mb L: 16/24 MS: 1 ChangeByte- 00:06:30.635 [2024-05-15 05:31:20.502854] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.635 [2024-05-15 05:31:20.502964] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14396) > buf size (4096) 00:06:30.635 [2024-05-15 05:31:20.503066] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.635 [2024-05-15 05:31:20.503260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a1d020b cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.635 [2024-05-15 05:31:20.503286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.635 [2024-05-15 05:31:20.503343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.503357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.636 [2024-05-15 05:31:20.503413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0000020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.503427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.636 #40 NEW cov: 12137 ft: 15249 corp: 26/373b lim: 30 exec/s: 40 rss: 72Mb L: 22/24 MS: 1 InsertByte- 00:06:30.636 [2024-05-15 05:31:20.542929] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:30.636 [2024-05-15 05:31:20.543041] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (834560) > buf size (4096) 00:06:30.636 [2024-05-15 05:31:20.543253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.543280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 [2024-05-15 05:31:20.543334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:2eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.543349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.636 #41 NEW cov: 12137 ft: 15265 corp: 27/388b lim: 30 exec/s: 41 rss: 72Mb L: 15/24 MS: 1 CopyPart- 00:06:30.636 [2024-05-15 05:31:20.583016] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000aff 00:06:30.636 [2024-05-15 05:31:20.583125] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002c2e 00:06:30.636 [2024-05-15 05:31:20.583321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 
[2024-05-15 05:31:20.583346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 [2024-05-15 05:31:20.583405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.583420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.636 #42 NEW cov: 12137 ft: 15274 corp: 28/404b lim: 30 exec/s: 42 rss: 72Mb L: 16/24 MS: 1 CrossOver- 00:06:30.636 [2024-05-15 05:31:20.633297] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff23 00:06:30.636 [2024-05-15 05:31:20.633416] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.636 [2024-05-15 05:31:20.633521] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.636 [2024-05-15 05:31:20.633623] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0a 00:06:30.636 [2024-05-15 05:31:20.633824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.633850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.636 [2024-05-15 05:31:20.633911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.633926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.636 [2024-05-15 05:31:20.633976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.633990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.636 [2024-05-15 05:31:20.634043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0b0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.636 [2024-05-15 05:31:20.634056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.896 #43 NEW cov: 12137 ft: 15293 corp: 29/429b lim: 30 exec/s: 43 rss: 72Mb L: 25/25 MS: 1 CrossOver- 00:06:30.896 [2024-05-15 05:31:20.683347] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bf0e 00:06:30.896 [2024-05-15 05:31:20.683464] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000e0e 00:06:30.896 [2024-05-15 05:31:20.683664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.683691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.683744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:06:30.896 [2024-05-15 05:31:20.683758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.896 #44 NEW cov: 12137 ft: 15319 corp: 30/445b lim: 30 exec/s: 44 rss: 72Mb L: 16/25 MS: 1 ChangeBit- 00:06:30.896 [2024-05-15 05:31:20.723492] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:30.896 [2024-05-15 05:31:20.723607] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:30.896 [2024-05-15 05:31:20.723997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.724023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.724082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.724097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.724150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.724164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.724217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.724230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.896 #45 NEW cov: 12137 ft: 15365 corp: 31/473b lim: 30 exec/s: 45 rss: 72Mb L: 28/28 MS: 1 CopyPart- 00:06:30.896 [2024-05-15 05:31:20.763546] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200008e0e 00:06:30.896 [2024-05-15 05:31:20.763660] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:30.896 [2024-05-15 05:31:20.763768] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14396) > buf size (4096) 00:06:30.896 [2024-05-15 05:31:20.763983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.764010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.764065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.764079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.764132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0e0e000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.764147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.896 #46 NEW cov: 12137 ft: 15375 corp: 32/495b lim: 30 exec/s: 46 rss: 72Mb L: 22/28 MS: 1 ChangeBit- 00:06:30.896 [2024-05-15 05:31:20.813768] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:30.896 [2024-05-15 05:31:20.813881] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2e0d 00:06:30.896 [2024-05-15 05:31:20.813985] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000d0b 00:06:30.896 [2024-05-15 05:31:20.814086] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000a0b 00:06:30.896 [2024-05-15 05:31:20.814293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.814318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.814373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.814392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.814445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff01022c cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.814458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.896 [2024-05-15 05:31:20.814510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.896 [2024-05-15 05:31:20.814524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.896 #47 NEW cov: 12137 ft: 15392 corp: 33/519b lim: 30 exec/s: 47 rss: 72Mb L: 24/28 MS: 1 PersAutoDict- DE: "\377\377\377\377\001,.\015"- 00:06:30.896 [2024-05-15 05:31:20.853800] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:30.897 [2024-05-15 05:31:20.853912] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000e0e 00:06:30.897 [2024-05-15 05:31:20.854113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:2cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.897 [2024-05-15 05:31:20.854139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.897 [2024-05-15 05:31:20.854193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ff2e830d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.897 [2024-05-15 05:31:20.854208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.897 #48 NEW cov: 12137 ft: 15397 corp: 34/535b lim: 30 exec/s: 48 rss: 72Mb L: 16/28 MS: 1 EraseBytes- 00:06:30.897 [2024-05-15 05:31:20.904097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:30.897 [2024-05-15 05:31:20.904123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.157 #49 NEW cov: 12137 ft: 15405 corp: 35/542b lim: 30 exec/s: 49 rss: 72Mb L: 7/28 MS: 1 EraseBytes- 00:06:31.157 [2024-05-15 05:31:20.954075] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:06:31.157 [2024-05-15 05:31:20.954282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:20.954307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.157 #50 NEW cov: 12137 ft: 15408 corp: 36/550b lim: 30 exec/s: 50 rss: 72Mb L: 8/28 MS: 1 ChangeBinInt- 00:06:31.157 [2024-05-15 05:31:20.994193] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000aff 00:06:31.157 [2024-05-15 05:31:20.994305] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002c2e 00:06:31.157 [2024-05-15 05:31:20.994512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:20.994538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.157 [2024-05-15 05:31:20.994595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:20.994609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.157 #51 NEW cov: 12137 ft: 15417 corp: 37/566b lim: 30 exec/s: 51 rss: 72Mb L: 16/28 MS: 1 ChangeBit- 00:06:31.157 [2024-05-15 05:31:21.044332] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:31.157 [2024-05-15 05:31:21.044559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0b020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:21.044585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.157 #52 NEW cov: 12137 ft: 15470 corp: 38/576b lim: 30 exec/s: 52 rss: 72Mb L: 10/28 MS: 1 EraseBytes- 00:06:31.157 [2024-05-15 05:31:21.094520] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000e0e 00:06:31.157 [2024-05-15 05:31:21.094631] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200000aff 00:06:31.157 [2024-05-15 05:31:21.094736] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100002c2e 00:06:31.157 [2024-05-15 05:31:21.094932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0e020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:21.094958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.157 [2024-05-15 05:31:21.095014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:0e87020e cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:21.095029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.157 [2024-05-15 05:31:21.095081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:21.095094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.157 #53 NEW cov: 12137 ft: 15473 corp: 39/598b lim: 30 exec/s: 53 rss: 72Mb L: 22/28 MS: 1 CrossOver- 00:06:31.157 [2024-05-15 05:31:21.134739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.157 [2024-05-15 05:31:21.134763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.157 #54 NEW cov: 12137 ft: 15484 corp: 40/605b lim: 30 exec/s: 27 rss: 72Mb L: 7/28 MS: 1 ShuffleBytes- 00:06:31.157 #54 DONE cov: 12137 ft: 15484 corp: 40/605b lim: 30 exec/s: 27 rss: 72Mb 00:06:31.157 ###### Recommended dictionary. ###### 00:06:31.157 "\377\377\377\377\001,.\015" # Uses: 3 00:06:31.157 "\200\000" # Uses: 0 00:06:31.157 ###### End of recommended dictionary. ###### 00:06:31.157 Done 54 runs in 2 second(s) 00:06:31.157 [2024-05-15 05:31:21.164629] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.417 05:31:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:31.417 [2024-05-15 05:31:21.333904] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:31.417 [2024-05-15 05:31:21.334000] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267046 ] 00:06:31.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.676 [2024-05-15 05:31:21.510173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.676 [2024-05-15 05:31:21.575360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.676 [2024-05-15 05:31:21.634287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.676 [2024-05-15 05:31:21.650238] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:31.676 [2024-05-15 05:31:21.650669] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:31.676 INFO: Running with entropic power schedule (0xFF, 100). 00:06:31.676 INFO: Seed: 1348281903 00:06:31.676 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:31.676 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:31.676 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:31.676 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.676 #2 INITED exec/s: 0 rss: 63Mb 00:06:31.676 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:31.676 This may also happen if the target rejected all inputs we tried so far 00:06:31.935 [2024-05-15 05:31:21.699885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:080a0043 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.935 [2024-05-15 05:31:21.699914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.195 NEW_FUNC[1/685]: 0x4850d0 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:32.195 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:32.195 #27 NEW cov: 11809 ft: 11796 corp: 2/9b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 5 CopyPart-InsertByte-ChangeBit-CrossOver-CopyPart- 00:06:32.195 [2024-05-15 05:31:22.041700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:080a0043 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.195 [2024-05-15 05:31:22.041751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.195 #28 NEW cov: 11939 ft: 12488 corp: 3/17b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 CrossOver- 00:06:32.195 [2024-05-15 05:31:22.101694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08430043 cdw11:08000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.195 [2024-05-15 05:31:22.101723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.195 #29 NEW cov: 11945 ft: 12840 corp: 4/25b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:32.195 [2024-05-15 05:31:22.151855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.195 [2024-05-15 05:31:22.151883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.195 #30 NEW cov: 12030 ft: 13135 corp: 5/33b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ShuffleBytes- 00:06:32.195 [2024-05-15 05:31:22.212134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08430043 cdw11:00000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.195 [2024-05-15 05:31:22.212164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.454 #31 NEW cov: 12030 ft: 13194 corp: 6/41b lim: 35 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:32.454 [2024-05-15 05:31:22.271997] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:32.454 [2024-05-15 05:31:22.272404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:43080000 cdw11:00004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.454 [2024-05-15 05:31:22.272439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.454 #32 NEW cov: 12039 ft: 13280 corp: 7/50b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CopyPart- 00:06:32.454 [2024-05-15 05:31:22.332488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:4 nsid:0 cdw10:430a0043 cdw11:0800080a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.455 [2024-05-15 05:31:22.332518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.455 #33 NEW cov: 12039 ft: 13326 corp: 8/58b lim: 35 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 ShuffleBytes- 00:06:32.455 [2024-05-15 05:31:22.392625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:430a0043 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.455 [2024-05-15 05:31:22.392651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.455 #34 NEW cov: 12039 ft: 13349 corp: 9/66b lim: 35 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 ChangeBinInt- 00:06:32.455 [2024-05-15 05:31:22.452867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a080043 cdw11:08000a08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.455 [2024-05-15 05:31:22.452896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.455 #35 NEW cov: 12039 ft: 13498 corp: 10/73b lim: 35 exec/s: 0 rss: 71Mb L: 7/9 MS: 1 EraseBytes- 00:06:32.714 [2024-05-15 05:31:22.502908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:08000a43 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.714 [2024-05-15 05:31:22.502936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.714 #36 NEW cov: 12039 ft: 13570 corp: 11/81b lim: 35 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 ShuffleBytes- 00:06:32.714 [2024-05-15 05:31:22.553057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:430a0043 cdw11:0800080a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.714 [2024-05-15 05:31:22.553085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.714 #37 NEW cov: 12039 ft: 13579 corp: 12/89b lim: 35 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 ChangeBinInt- 00:06:32.714 [2024-05-15 05:31:22.603281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a080043 cdw11:08000a08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.714 [2024-05-15 05:31:22.603309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.714 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:32.714 #38 NEW cov: 12062 ft: 13643 corp: 13/97b lim: 35 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 InsertByte- 00:06:32.714 [2024-05-15 05:31:22.663404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a080043 cdw11:08000a08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.714 [2024-05-15 05:31:22.663432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.714 #39 NEW cov: 12062 ft: 13659 corp: 14/105b lim: 35 exec/s: 39 rss: 71Mb L: 8/9 MS: 1 ChangeByte- 00:06:32.714 [2024-05-15 05:31:22.723624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:32.714 [2024-05-15 05:31:22.723651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.973 #40 NEW cov: 12062 ft: 13675 corp: 15/113b lim: 35 exec/s: 40 rss: 71Mb L: 8/9 MS: 1 CrossOver- 00:06:32.973 [2024-05-15 05:31:22.773735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a430008 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.973 [2024-05-15 05:31:22.773762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.973 #41 NEW cov: 12062 ft: 13683 corp: 16/121b lim: 35 exec/s: 41 rss: 71Mb L: 8/9 MS: 1 ChangeBit- 00:06:32.973 [2024-05-15 05:31:22.823948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:430a0043 cdw11:0800087a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.973 [2024-05-15 05:31:22.823979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.973 #42 NEW cov: 12062 ft: 13722 corp: 17/129b lim: 35 exec/s: 42 rss: 71Mb L: 8/9 MS: 1 ChangeByte- 00:06:32.973 [2024-05-15 05:31:22.884135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:ff0043ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.973 [2024-05-15 05:31:22.884164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.973 #43 NEW cov: 12062 ft: 13740 corp: 18/137b lim: 35 exec/s: 43 rss: 71Mb L: 8/9 MS: 1 CMP- DE: "\377\377\377\327"- 00:06:32.973 [2024-05-15 05:31:22.934231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:32.973 [2024-05-15 05:31:22.934259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.973 #44 NEW cov: 12062 ft: 13764 corp: 19/150b lim: 35 exec/s: 44 rss: 71Mb L: 13/13 MS: 1 CopyPart- 00:06:33.232 [2024-05-15 05:31:22.994490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:430a0043 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.232 [2024-05-15 05:31:22.994534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.232 #45 NEW cov: 12062 ft: 13793 corp: 20/159b lim: 35 exec/s: 45 rss: 72Mb L: 9/13 MS: 1 InsertByte- 00:06:33.232 [2024-05-15 05:31:23.054742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3a430008 cdw11:0a004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.232 [2024-05-15 05:31:23.054770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.232 #46 NEW cov: 12062 ft: 13814 corp: 21/167b lim: 35 exec/s: 46 rss: 72Mb L: 8/13 MS: 1 ChangeByte- 00:06:33.232 [2024-05-15 05:31:23.104885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aff0008 cdw11:4300ff43 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.232 [2024-05-15 05:31:23.104912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.232 #47 NEW cov: 12062 ft: 
13818 corp: 22/177b lim: 35 exec/s: 47 rss: 72Mb L: 10/13 MS: 1 CopyPart- 00:06:33.232 [2024-05-15 05:31:23.165013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a0a0008 cdw11:ff00080a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.232 [2024-05-15 05:31:23.165040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.232 #49 NEW cov: 12062 ft: 13824 corp: 23/185b lim: 35 exec/s: 49 rss: 72Mb L: 8/13 MS: 2 EraseBytes-CrossOver- 00:06:33.232 [2024-05-15 05:31:23.225265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:08000a43 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.232 [2024-05-15 05:31:23.225292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.490 #50 NEW cov: 12062 ft: 13844 corp: 24/197b lim: 35 exec/s: 50 rss: 72Mb L: 12/13 MS: 1 CopyPart- 00:06:33.491 [2024-05-15 05:31:23.285515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0008 cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.491 [2024-05-15 05:31:23.285543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.491 #51 NEW cov: 12062 ft: 13857 corp: 25/209b lim: 35 exec/s: 51 rss: 72Mb L: 12/13 MS: 1 InsertRepeatedBytes- 00:06:33.491 [2024-05-15 05:31:23.335601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:08430043 cdw11:ff0008ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.491 [2024-05-15 05:31:23.335630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.491 #52 NEW cov: 12062 ft: 13864 corp: 26/221b lim: 35 exec/s: 52 rss: 72Mb L: 12/13 MS: 1 PersAutoDict- DE: "\377\377\377\327"- 00:06:33.491 [2024-05-15 05:31:23.385806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a080043 cdw11:08000a08 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.491 [2024-05-15 05:31:23.385835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.491 #53 NEW cov: 12062 ft: 13880 corp: 27/228b lim: 35 exec/s: 53 rss: 72Mb L: 7/13 MS: 1 ChangeBit- 00:06:33.491 [2024-05-15 05:31:23.435942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a430008 cdw11:ff0043ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.491 [2024-05-15 05:31:23.435970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.491 #54 NEW cov: 12062 ft: 13934 corp: 28/236b lim: 35 exec/s: 54 rss: 72Mb L: 8/13 MS: 1 ChangeByte- 00:06:33.491 [2024-05-15 05:31:23.485706] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:33.491 [2024-05-15 05:31:23.486102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:43f40000 cdw11:00004308 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.491 [2024-05-15 05:31:23.486137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.750 #55 NEW cov: 12062 ft: 13956 corp: 29/245b 
lim: 35 exec/s: 55 rss: 72Mb L: 9/13 MS: 1 ChangeByte- 00:06:33.750 [2024-05-15 05:31:23.546988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a080043 cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.547016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.750 [2024-05-15 05:31:23.547170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.547187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.750 [2024-05-15 05:31:23.547328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.547346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.750 [2024-05-15 05:31:23.547489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.547507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.750 #56 NEW cov: 12062 ft: 14595 corp: 30/276b lim: 35 exec/s: 56 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:33.750 [2024-05-15 05:31:23.596715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:8400d784 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.596742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.750 [2024-05-15 05:31:23.596899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:84840084 cdw11:84008484 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.596919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.750 #60 NEW cov: 12062 ft: 14798 corp: 31/294b lim: 35 exec/s: 60 rss: 72Mb L: 18/31 MS: 4 EraseBytes-PersAutoDict-PersAutoDict-InsertRepeatedBytes- DE: "\377\377\377\327"-"\377\377\377\327"- 00:06:33.750 [2024-05-15 05:31:23.646659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff0008 cdw11:0a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.750 [2024-05-15 05:31:23.646692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.750 #61 NEW cov: 12062 ft: 14820 corp: 32/306b lim: 35 exec/s: 61 rss: 72Mb L: 12/31 MS: 1 ChangeBinInt- 00:06:33.750 [2024-05-15 05:31:23.706897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:430a0043 cdw11:08000808 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.751 [2024-05-15 05:31:23.706924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.751 #62 NEW cov: 12062 ft: 14834 corp: 33/315b lim: 35 exec/s: 31 rss: 73Mb L: 9/31 MS: 1 ChangeByte- 00:06:33.751 #62 
DONE cov: 12062 ft: 14834 corp: 33/315b lim: 35 exec/s: 31 rss: 73Mb 00:06:33.751 ###### Recommended dictionary. ###### 00:06:33.751 "\377\377\377\327" # Uses: 3 00:06:33.751 ###### End of recommended dictionary. ###### 00:06:33.751 Done 62 runs in 2 second(s) 00:06:33.751 [2024-05-15 05:31:23.737229] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.010 05:31:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:34.010 [2024-05-15 05:31:23.904950] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:34.010 [2024-05-15 05:31:23.905024] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267575 ] 00:06:34.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.269 [2024-05-15 05:31:24.082223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.269 [2024-05-15 05:31:24.147676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.269 [2024-05-15 05:31:24.206862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.269 [2024-05-15 05:31:24.222835] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:34.269 [2024-05-15 05:31:24.223226] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:34.269 INFO: Running with entropic power schedule (0xFF, 100). 00:06:34.269 INFO: Seed: 3922283905 00:06:34.269 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:34.269 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:34.269 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:34.269 INFO: A corpus is not provided, starting from an empty corpus 00:06:34.269 #2 INITED exec/s: 0 rss: 64Mb 00:06:34.269 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:34.269 This may also happen if the target rejected all inputs we tried so far 00:06:34.788 NEW_FUNC[1/674]: 0x486da0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:34.788 NEW_FUNC[2/674]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.788 #8 NEW cov: 11706 ft: 11707 corp: 2/6b lim: 20 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "~\000\000\000"- 00:06:34.788 #14 NEW cov: 11836 ft: 12394 corp: 3/11b lim: 20 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:06:34.788 #15 NEW cov: 11842 ft: 12738 corp: 4/16b lim: 20 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ChangeBit- 00:06:34.788 #19 NEW cov: 11944 ft: 13279 corp: 5/28b lim: 20 exec/s: 0 rss: 70Mb L: 12/12 MS: 4 ChangeBit-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:06:34.788 #20 NEW cov: 11944 ft: 13362 corp: 6/34b lim: 20 exec/s: 0 rss: 70Mb L: 6/12 MS: 1 CrossOver- 00:06:34.788 #21 NEW cov: 11944 ft: 13405 corp: 7/40b lim: 20 exec/s: 0 rss: 70Mb L: 6/12 MS: 1 InsertByte- 00:06:35.047 #34 NEW cov: 11944 ft: 13538 corp: 8/44b lim: 20 exec/s: 0 rss: 71Mb L: 4/12 MS: 3 EraseBytes-CrossOver-InsertByte- 00:06:35.047 #35 NEW cov: 11944 ft: 13554 corp: 9/49b lim: 20 exec/s: 0 rss: 71Mb L: 5/12 MS: 1 ChangeByte- 00:06:35.047 #36 NEW cov: 11944 ft: 13661 corp: 10/54b lim: 20 exec/s: 0 rss: 71Mb L: 5/12 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:06:35.047 #37 NEW cov: 11945 ft: 13886 corp: 11/63b lim: 20 exec/s: 0 rss: 71Mb L: 9/12 MS: 1 CrossOver- 00:06:35.047 #38 NEW cov: 11945 ft: 13907 corp: 12/72b lim: 20 exec/s: 0 rss: 71Mb L: 9/12 MS: 1 CrossOver- 00:06:35.047 #39 NEW cov: 11945 ft: 13934 corp: 13/83b lim: 20 exec/s: 0 rss: 71Mb L: 11/12 MS: 1 CopyPart- 00:06:35.047 [2024-05-15 05:31:25.060867] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.047 [2024-05-15 05:31:25.060905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.307 NEW_FUNC[1/20]: 0x11850d0 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3333 00:06:35.307 NEW_FUNC[2/20]: 0x1185c50 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3275 00:06:35.307 #40 NEW cov: 12284 ft: 14451 corp: 14/101b lim: 20 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 InsertRepeatedBytes- 00:06:35.307 #42 NEW cov: 12284 ft: 14464 corp: 15/108b lim: 20 exec/s: 0 rss: 71Mb L: 7/18 MS: 2 EraseBytes-PersAutoDict- DE: "~\000\000\000"- 00:06:35.307 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:35.307 #43 NEW cov: 12307 ft: 14541 corp: 16/115b lim: 20 exec/s: 0 rss: 71Mb L: 7/18 MS: 1 ChangeBit- 00:06:35.307 #45 NEW cov: 12307 ft: 14626 corp: 17/123b lim: 20 exec/s: 0 rss: 71Mb L: 8/18 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:35.307 #47 NEW cov: 12310 ft: 14768 corp: 18/134b lim: 20 exec/s: 47 rss: 71Mb L: 11/18 MS: 2 EraseBytes-CMP- DE: "\364r\024\350\222\177\000\000"- 00:06:35.307 [2024-05-15 05:31:25.281166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.307 [2024-05-15 05:31:25.281196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.307 #48 NEW cov: 12310 ft: 14788 corp: 19/143b lim: 20 exec/s: 48 rss: 71Mb L: 9/18 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:06:35.566 #49 NEW cov: 12310 ft: 14843 corp: 20/149b lim: 20 exec/s: 49 rss: 72Mb L: 6/18 MS: 1 CrossOver- 00:06:35.566 #50 NEW cov: 12310 ft: 14853 corp: 21/161b lim: 20 exec/s: 50 rss: 72Mb L: 12/18 MS: 1 ChangeBinInt- 00:06:35.566 #51 NEW cov: 12310 ft: 14870 corp: 22/174b lim: 20 exec/s: 51 rss: 72Mb L: 13/18 MS: 1 PersAutoDict- DE: "\364r\024\350\222\177\000\000"- 00:06:35.566 #52 NEW cov: 12310 ft: 14883 corp: 23/179b lim: 20 exec/s: 52 rss: 72Mb L: 5/18 MS: 1 ChangeByte- 00:06:35.566 #53 NEW cov: 12310 ft: 14901 corp: 24/185b lim: 20 exec/s: 53 rss: 72Mb L: 6/18 MS: 1 InsertByte- 00:06:35.566 #54 NEW cov: 12310 ft: 14911 corp: 25/191b lim: 20 exec/s: 54 rss: 72Mb L: 6/18 MS: 1 EraseBytes- 00:06:35.825 #55 NEW cov: 12310 ft: 14925 corp: 26/197b lim: 20 exec/s: 55 rss: 72Mb L: 6/18 MS: 1 ChangeByte- 00:06:35.825 [2024-05-15 05:31:25.632365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.825 [2024-05-15 05:31:25.632397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.826 #56 NEW cov: 12310 ft: 15023 corp: 27/209b lim: 20 exec/s: 56 rss: 72Mb L: 12/18 MS: 1 CMP- DE: "\000\000\000\000\000\000\000H"- 00:06:35.826 #57 NEW cov: 12310 ft: 15035 corp: 28/213b lim: 20 exec/s: 57 rss: 72Mb L: 4/18 MS: 1 ChangeBit- 00:06:35.826 #58 NEW cov: 12310 ft: 15042 corp: 29/219b lim: 20 exec/s: 58 rss: 72Mb L: 6/18 MS: 1 ChangeBinInt- 00:06:35.826 #59 NEW cov: 12310 ft: 15061 corp: 30/225b lim: 20 exec/s: 59 rss: 72Mb L: 6/18 MS: 1 ChangeByte- 00:06:35.826 [2024-05-15 05:31:25.792621] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.826 [2024-05-15 05:31:25.792648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.826 #60 NEW cov: 12310 ft: 15068 corp: 31/233b lim: 20 exec/s: 60 rss: 72Mb L: 8/18 MS: 1 CopyPart- 00:06:35.826 [2024-05-15 05:31:25.832987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.826 [2024-05-15 05:31:25.833012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.084 #61 NEW cov: 12310 ft: 15121 corp: 32/251b lim: 20 exec/s: 61 rss: 72Mb L: 18/18 MS: 1 CMP- DE: "\377\377\377\015"- 00:06:36.084 [2024-05-15 05:31:25.883067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:36.084 [2024-05-15 05:31:25.883093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.084 #62 NEW cov: 12310 ft: 15125 corp: 33/265b lim: 20 exec/s: 62 rss: 72Mb L: 14/18 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000H"- 00:06:36.084 #63 NEW cov: 12310 ft: 15127 corp: 34/274b lim: 20 exec/s: 63 rss: 73Mb L: 9/18 MS: 1 CopyPart- 00:06:36.084 #64 NEW cov: 12310 ft: 15135 corp: 35/280b lim: 20 exec/s: 64 rss: 73Mb L: 6/18 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:06:36.084 #66 NEW cov: 12310 ft: 15139 corp: 36/284b lim: 20 exec/s: 66 rss: 73Mb L: 4/18 MS: 2 EraseBytes-InsertByte- 00:06:36.084 #67 NEW cov: 12310 ft: 15159 corp: 37/296b lim: 20 exec/s: 67 rss: 73Mb L: 12/18 MS: 1 ShuffleBytes- 00:06:36.346 #68 NEW cov: 12310 ft: 15166 corp: 38/300b lim: 20 exec/s: 68 rss: 73Mb L: 4/18 MS: 1 EraseBytes- 00:06:36.346 #69 NEW cov: 12310 ft: 15183 corp: 39/306b lim: 20 exec/s: 69 rss: 73Mb L: 6/18 MS: 1 InsertByte- 00:06:36.346 #75 NEW cov: 12310 ft: 15203 corp: 40/322b lim: 20 exec/s: 75 rss: 73Mb L: 16/18 MS: 1 PersAutoDict- DE: "\377\377\377\015"- 00:06:36.346 [2024-05-15 05:31:26.223930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:36.346 [2024-05-15 05:31:26.223958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.346 #76 NEW cov: 12310 ft: 15222 corp: 41/335b lim: 20 exec/s: 76 rss: 73Mb L: 13/18 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000H"- 00:06:36.346 #77 NEW cov: 12310 ft: 15233 corp: 42/342b lim: 20 exec/s: 38 rss: 73Mb L: 7/18 MS: 1 InsertByte- 00:06:36.346 #77 DONE cov: 12310 ft: 15233 corp: 42/342b lim: 20 exec/s: 38 rss: 73Mb 00:06:36.346 ###### Recommended dictionary. ###### 00:06:36.346 "~\000\000\000" # Uses: 5 00:06:36.346 "\364r\024\350\222\177\000\000" # Uses: 1 00:06:36.346 "\000\000\000\000\000\000\000H" # Uses: 2 00:06:36.346 "\377\377\377\015" # Uses: 1 00:06:36.346 ###### End of recommended dictionary. 
###### 00:06:36.346 Done 77 runs in 2 second(s) 00:06:36.346 [2024-05-15 05:31:26.292745] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:36.641 05:31:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:36.641 [2024-05-15 05:31:26.461644] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:36.641 [2024-05-15 05:31:26.461715] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267876 ] 00:06:36.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.641 [2024-05-15 05:31:26.646001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.899 [2024-05-15 05:31:26.712990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.899 [2024-05-15 05:31:26.772645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.899 [2024-05-15 05:31:26.788599] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:36.899 [2024-05-15 05:31:26.789018] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:36.899 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.899 INFO: Seed: 2193320205 00:06:36.899 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:36.899 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:36.900 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:36.900 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.900 #2 INITED exec/s: 0 rss: 64Mb 00:06:36.900 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:36.900 This may also happen if the target rejected all inputs we tried so far 00:06:36.900 [2024-05-15 05:31:26.855072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.900 [2024-05-15 05:31:26.855108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.158 NEW_FUNC[1/684]: 0x487e90 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:37.158 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:37.158 #10 NEW cov: 11779 ft: 11780 corp: 2/8b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:06:37.416 [2024-05-15 05:31:27.186189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dfdf0edf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.186237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.416 [2024-05-15 05:31:27.186365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.186393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.416 NEW_FUNC[1/2]: 0xfc65e0 in posix_sock_read /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1503 00:06:37.416 NEW_FUNC[2/2]: 0x1ed14a0 in spdk_pipe_writer_get_buffer 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/util/pipe.c:92 00:06:37.416 #14 NEW cov: 11960 ft: 13171 corp: 3/24b lim: 35 exec/s: 0 rss: 70Mb L: 16/16 MS: 4 ShuffleBytes-InsertByte-ChangeBit-InsertRepeatedBytes- 00:06:37.416 [2024-05-15 05:31:27.235834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.235860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.416 #15 NEW cov: 11966 ft: 13382 corp: 4/31b lim: 35 exec/s: 0 rss: 70Mb L: 7/16 MS: 1 CopyPart- 00:06:37.416 [2024-05-15 05:31:27.286023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.286049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.416 #16 NEW cov: 12051 ft: 13683 corp: 5/38b lim: 35 exec/s: 0 rss: 70Mb L: 7/16 MS: 1 CopyPart- 00:06:37.416 [2024-05-15 05:31:27.326193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.326219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.416 #17 NEW cov: 12051 ft: 13791 corp: 6/45b lim: 35 exec/s: 0 rss: 71Mb L: 7/16 MS: 1 ShuffleBytes- 00:06:37.416 [2024-05-15 05:31:27.376139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.376167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.416 #18 NEW cov: 12051 ft: 13877 corp: 7/52b lim: 35 exec/s: 0 rss: 71Mb L: 7/16 MS: 1 CrossOver- 00:06:37.416 [2024-05-15 05:31:27.426448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:07000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.416 [2024-05-15 05:31:27.426476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.675 #19 NEW cov: 12051 ft: 14046 corp: 8/59b lim: 35 exec/s: 0 rss: 71Mb L: 7/16 MS: 1 ChangeBinInt- 00:06:37.675 [2024-05-15 05:31:27.466870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dfdf0edf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.466896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.675 [2024-05-15 05:31:27.467028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.467045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.675 #20 NEW cov: 12051 ft: 14082 corp: 9/75b lim: 35 exec/s: 0 rss: 71Mb L: 16/16 MS: 1 CrossOver- 00:06:37.675 [2024-05-15 05:31:27.517078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:07003277 cdw11:270e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.517104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.675 [2024-05-15 05:31:27.517233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.517250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.675 #25 NEW cov: 12051 ft: 14134 corp: 10/95b lim: 35 exec/s: 0 rss: 71Mb L: 20/20 MS: 5 EraseBytes-InsertByte-ChangeBit-ChangeByte-CrossOver- 00:06:37.675 [2024-05-15 05:31:27.566931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:009c0025 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.566958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.675 #26 NEW cov: 12051 ft: 14224 corp: 11/102b lim: 35 exec/s: 0 rss: 71Mb L: 7/20 MS: 1 ShuffleBytes- 00:06:37.675 [2024-05-15 05:31:27.606916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.606943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.675 #27 NEW cov: 12051 ft: 14281 corp: 12/110b lim: 35 exec/s: 0 rss: 71Mb L: 8/20 MS: 1 InsertByte- 00:06:37.675 [2024-05-15 05:31:27.647141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.647169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.675 #28 NEW cov: 12051 ft: 14318 corp: 13/118b lim: 35 exec/s: 0 rss: 71Mb L: 8/20 MS: 1 CrossOver- 00:06:37.675 [2024-05-15 05:31:27.687102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.675 [2024-05-15 05:31:27.687129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.933 #29 NEW cov: 12051 ft: 14391 corp: 14/125b lim: 35 exec/s: 0 rss: 72Mb L: 7/20 MS: 1 ChangeBit- 00:06:37.933 [2024-05-15 05:31:27.737942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00f90000 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.737972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.933 [2024-05-15 05:31:27.738093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.738125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.933 [2024-05-15 05:31:27.738237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.738254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.933 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:37.933 #33 NEW cov: 12074 ft: 14689 corp: 15/148b lim: 35 exec/s: 0 rss: 72Mb L: 23/23 MS: 4 EraseBytes-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:06:37.933 [2024-05-15 05:31:27.787770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00050025 cdw11:77770002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.787794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.933 [2024-05-15 05:31:27.787907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:77777777 cdw11:77770002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.787923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.933 #37 NEW cov: 12074 ft: 14704 corp: 16/165b lim: 35 exec/s: 0 rss: 72Mb L: 17/23 MS: 4 EraseBytes-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:06:37.933 [2024-05-15 05:31:27.837701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.837726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.933 #38 NEW cov: 12074 ft: 14718 corp: 17/172b lim: 35 exec/s: 38 rss: 72Mb L: 7/23 MS: 1 CopyPart- 00:06:37.933 [2024-05-15 05:31:27.877837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00d60001 cdw11:d6d60003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.877863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.933 #41 NEW cov: 12074 ft: 14720 corp: 18/183b lim: 35 exec/s: 41 rss: 72Mb L: 11/23 MS: 3 EraseBytes-ChangeByte-InsertRepeatedBytes- 00:06:37.933 [2024-05-15 05:31:27.928207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0edf0000 cdw11:00df0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.928233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.933 [2024-05-15 05:31:27.928345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.933 [2024-05-15 05:31:27.928360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.933 #43 NEW cov: 12074 ft: 14729 corp: 19/202b lim: 35 exec/s: 43 rss: 72Mb L: 19/23 MS: 2 CrossOver-CrossOver- 00:06:38.191 [2024-05-15 05:31:27.968189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dfdf0edf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 
05:31:27.968217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:27.968342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:27.968359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.191 #44 NEW cov: 12074 ft: 14767 corp: 20/218b lim: 35 exec/s: 44 rss: 72Mb L: 16/23 MS: 1 ShuffleBytes- 00:06:38.191 [2024-05-15 05:31:28.018905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:ebeb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.018931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.019044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00f90000 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.019061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.019181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.019199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.019318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.019334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.191 #45 NEW cov: 12074 ft: 15134 corp: 21/248b lim: 35 exec/s: 45 rss: 72Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:38.191 [2024-05-15 05:31:28.068891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.068918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.069038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.069053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.069175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.069189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.191 #47 NEW cov: 12074 ft: 15143 corp: 22/275b lim: 35 exec/s: 47 rss: 72Mb L: 27/30 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:38.191 [2024-05-15 05:31:28.108751] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00050025 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.108778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.108897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:77770002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.108913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.191 #48 NEW cov: 12074 ft: 15154 corp: 23/290b lim: 35 exec/s: 48 rss: 72Mb L: 15/30 MS: 1 CrossOver- 00:06:38.191 [2024-05-15 05:31:28.158910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:32410000 cdw11:00f00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.158937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.159060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.159077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.191 #51 NEW cov: 12074 ft: 15182 corp: 24/307b lim: 35 exec/s: 51 rss: 72Mb L: 17/30 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:38.191 [2024-05-15 05:31:28.199550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dfdf0edf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.199576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.199689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dfffdfdf cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.199704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.199816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.199833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.191 [2024-05-15 05:31:28.199945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:dfdfffff cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.191 [2024-05-15 05:31:28.199960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.450 #52 NEW cov: 12074 ft: 15201 corp: 25/336b lim: 35 exec/s: 52 rss: 72Mb L: 29/30 MS: 1 InsertRepeatedBytes- 00:06:38.450 [2024-05-15 05:31:28.239390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00050025 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.239417] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.239549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0005f9f9 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.239567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.239685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:77770002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.239701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.450 #53 NEW cov: 12074 ft: 15213 corp: 26/358b lim: 35 exec/s: 53 rss: 72Mb L: 22/30 MS: 1 CrossOver- 00:06:38.450 [2024-05-15 05:31:28.289678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:dada0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.289704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.289842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:eb00ebeb cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.289860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.289975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.289996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.290109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.290127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.450 #54 NEW cov: 12074 ft: 15229 corp: 27/391b lim: 35 exec/s: 54 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:38.450 [2024-05-15 05:31:28.338719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00004000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.338748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.450 #55 NEW cov: 12074 ft: 15269 corp: 28/398b lim: 35 exec/s: 55 rss: 73Mb L: 7/33 MS: 1 ChangeBit- 00:06:38.450 [2024-05-15 05:31:28.379214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01002500 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.379240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.450 #56 NEW cov: 12074 ft: 15290 corp: 29/406b lim: 35 exec/s: 56 rss: 73Mb L: 8/33 MS: 1 InsertByte- 00:06:38.450 [2024-05-15 
05:31:28.419812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.419840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.419960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f900f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.419976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.420099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:fff90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.420116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.450 #59 NEW cov: 12074 ft: 15301 corp: 30/433b lim: 35 exec/s: 59 rss: 73Mb L: 27/33 MS: 3 EraseBytes-ChangeBinInt-CrossOver- 00:06:38.450 [2024-05-15 05:31:28.460124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00f90000 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.460150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.460273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.460288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.450 [2024-05-15 05:31:28.460407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.450 [2024-05-15 05:31:28.460424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.708 #60 NEW cov: 12074 ft: 15328 corp: 31/456b lim: 35 exec/s: 60 rss: 73Mb L: 23/33 MS: 1 ChangeBit- 00:06:38.708 [2024-05-15 05:31:28.500439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:dada0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.500463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.500578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:eb00ebeb cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.500595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.500717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:21f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.500731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:38.708 [2024-05-15 05:31:28.500856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.500871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.708 #61 NEW cov: 12074 ft: 15339 corp: 32/489b lim: 35 exec/s: 61 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:06:38.708 [2024-05-15 05:31:28.549829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fcff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.549855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.708 #62 NEW cov: 12074 ft: 15352 corp: 33/496b lim: 35 exec/s: 62 rss: 73Mb L: 7/33 MS: 1 ChangeBinInt- 00:06:38.708 [2024-05-15 05:31:28.590509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:dfdf0edf cdw11:dfdf0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.590536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.590648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0000dfdf cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.590664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.590772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.590788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.708 #63 NEW cov: 12074 ft: 15363 corp: 34/521b lim: 35 exec/s: 63 rss: 73Mb L: 25/33 MS: 1 CopyPart- 00:06:38.708 [2024-05-15 05:31:28.630919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ebebebeb cdw11:dada0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.630946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.631058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:eb00ebeb cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.631074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.631191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.631207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.631328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0606f90d cdw11:06f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.631346] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.708 #64 NEW cov: 12074 ft: 15385 corp: 35/554b lim: 35 exec/s: 64 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:06:38.708 [2024-05-15 05:31:28.670476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:32410000 cdw11:00f00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.670502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.670619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f0f0f0f0 cdw11:f0f00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.670636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.708 #65 NEW cov: 12074 ft: 15404 corp: 36/571b lim: 35 exec/s: 65 rss: 73Mb L: 17/33 MS: 1 ShuffleBytes- 00:06:38.708 [2024-05-15 05:31:28.720935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:32410000 cdw11:00f00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.720960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.721093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00f00003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.721110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.708 [2024-05-15 05:31:28.721225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9c25f0f0 cdw11:f0f00000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.708 [2024-05-15 05:31:28.721242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.064 #66 NEW cov: 12074 ft: 15486 corp: 37/594b lim: 35 exec/s: 66 rss: 73Mb L: 23/33 MS: 1 CrossOver- 00:06:39.064 [2024-05-15 05:31:28.770481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.064 [2024-05-15 05:31:28.770509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.064 #67 NEW cov: 12074 ft: 15510 corp: 38/602b lim: 35 exec/s: 67 rss: 73Mb L: 8/33 MS: 1 ShuffleBytes- 00:06:39.064 [2024-05-15 05:31:28.820635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:009c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.064 [2024-05-15 05:31:28.820663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.064 #68 NEW cov: 12074 ft: 15545 corp: 39/610b lim: 35 exec/s: 34 rss: 73Mb L: 8/33 MS: 1 ShuffleBytes- 00:06:39.064 #68 DONE cov: 12074 ft: 15545 corp: 39/610b lim: 35 exec/s: 34 rss: 73Mb 00:06:39.064 Done 68 runs in 2 second(s) 00:06:39.064 [2024-05-15 05:31:28.850309] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.064 05:31:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:39.064 [2024-05-15 05:31:29.015313] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:39.064 [2024-05-15 05:31:29.015393] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268401 ] 00:06:39.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.320 [2024-05-15 05:31:29.192871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.320 [2024-05-15 05:31:29.257957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.320 [2024-05-15 05:31:29.317098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.320 [2024-05-15 05:31:29.333043] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:39.320 [2024-05-15 05:31:29.333415] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:39.578 INFO: Running with entropic power schedule (0xFF, 100). 00:06:39.578 INFO: Seed: 441355692 00:06:39.578 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:39.578 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:39.578 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:39.578 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.578 #2 INITED exec/s: 0 rss: 64Mb 00:06:39.578 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:39.578 This may also happen if the target rejected all inputs we tried so far 00:06:39.578 [2024-05-15 05:31:29.378663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.578 [2024-05-15 05:31:29.378692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.834 NEW_FUNC[1/686]: 0x48a020 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:39.834 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.834 #6 NEW cov: 11841 ft: 11838 corp: 2/11b lim: 45 exec/s: 0 rss: 70Mb L: 10/10 MS: 4 ChangeByte-ChangeBit-CopyPart-CMP- DE: "\001\205\316\272\333\012G2"- 00:06:39.834 [2024-05-15 05:31:29.720347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.834 [2024-05-15 05:31:29.720405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.834 #7 NEW cov: 11971 ft: 12565 corp: 3/21b lim: 45 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:06:39.834 [2024-05-15 05:31:29.770304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.834 [2024-05-15 05:31:29.770333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.834 #8 NEW cov: 11977 ft: 12737 corp: 4/31b lim: 45 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 
PersAutoDict- DE: "\001\205\316\272\333\012G2"- 00:06:39.834 [2024-05-15 05:31:29.820058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:39.834 [2024-05-15 05:31:29.820089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.834 #9 NEW cov: 12062 ft: 12961 corp: 5/44b lim: 45 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:06:40.092 [2024-05-15 05:31:29.870343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:85ce0a01 cdw11:badb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:29.870371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.092 #10 NEW cov: 12062 ft: 13061 corp: 6/53b lim: 45 exec/s: 0 rss: 70Mb L: 9/13 MS: 1 PersAutoDict- DE: "\001\205\316\272\333\012G2"- 00:06:40.092 [2024-05-15 05:31:29.910697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:29.910729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.092 #11 NEW cov: 12062 ft: 13102 corp: 7/66b lim: 45 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:06:40.092 [2024-05-15 05:31:29.960788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:0a0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:29.960816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.092 #12 NEW cov: 12062 ft: 13221 corp: 8/76b lim: 45 exec/s: 0 rss: 71Mb L: 10/13 MS: 1 CrossOver- 00:06:40.092 [2024-05-15 05:31:30.000989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:30.001018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.092 #13 NEW cov: 12062 ft: 13254 corp: 9/87b lim: 45 exec/s: 0 rss: 71Mb L: 11/13 MS: 1 InsertByte- 00:06:40.092 [2024-05-15 05:31:30.051499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:30.051527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.092 [2024-05-15 05:31:30.051648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:85ce2201 cdw11:badb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:30.051667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.092 #14 NEW cov: 12062 ft: 14079 corp: 10/108b lim: 45 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 PersAutoDict- DE: "\001\205\316\272\333\012G2"- 00:06:40.092 [2024-05-15 05:31:30.101224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:30.101252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.092 [2024-05-15 05:31:30.101393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:badb85ce cdw11:0a470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.092 [2024-05-15 05:31:30.101411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.350 #15 NEW cov: 12062 ft: 14107 corp: 11/126b lim: 45 exec/s: 0 rss: 71Mb L: 18/21 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.350 [2024-05-15 05:31:30.141520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.141548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.350 #16 NEW cov: 12062 ft: 14213 corp: 12/136b lim: 45 exec/s: 0 rss: 71Mb L: 10/21 MS: 1 ChangeBinInt- 00:06:40.350 [2024-05-15 05:31:30.181451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff840185 cdw11:cebb0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.181490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.350 [2024-05-15 05:31:30.181608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:badbacce cdw11:0a470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.181625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.350 #17 NEW cov: 12062 ft: 14228 corp: 13/154b lim: 45 exec/s: 0 rss: 71Mb L: 18/21 MS: 1 CMP- DE: "\377\204\316\273Q;9\254"- 00:06:40.350 [2024-05-15 05:31:30.221755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ce840185 cdw11:ff510001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.221781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.350 [2024-05-15 05:31:30.221894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:badbacce cdw11:0a470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.221910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.350 #18 NEW cov: 12062 ft: 14256 corp: 14/172b lim: 45 exec/s: 0 rss: 71Mb L: 18/21 MS: 1 ShuffleBytes- 00:06:40.350 [2024-05-15 05:31:30.271826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.271852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.350 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:40.350 #19 NEW cov: 12085 ft: 14292 corp: 15/185b lim: 45 exec/s: 0 rss: 71Mb L: 13/21 MS: 1 CopyPart- 00:06:40.350 [2024-05-15 
05:31:30.311670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:0a0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.311698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.350 #20 NEW cov: 12085 ft: 14333 corp: 16/195b lim: 45 exec/s: 0 rss: 71Mb L: 10/21 MS: 1 ChangeASCIIInt- 00:06:40.350 [2024-05-15 05:31:30.361653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.350 [2024-05-15 05:31:30.361679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 #21 NEW cov: 12085 ft: 14409 corp: 17/208b lim: 45 exec/s: 21 rss: 71Mb L: 13/21 MS: 1 CopyPart- 00:06:40.609 [2024-05-15 05:31:30.402055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.402084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 #22 NEW cov: 12085 ft: 14472 corp: 18/223b lim: 45 exec/s: 22 rss: 71Mb L: 15/21 MS: 1 EraseBytes- 00:06:40.609 [2024-05-15 05:31:30.452581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.452607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 [2024-05-15 05:31:30.452731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0a47badb cdw11:22220001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.452746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.609 #23 NEW cov: 12085 ft: 14508 corp: 19/248b lim: 45 exec/s: 23 rss: 71Mb L: 25/25 MS: 1 CopyPart- 00:06:40.609 [2024-05-15 05:31:30.492389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.492414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 #24 NEW cov: 12085 ft: 14515 corp: 20/261b lim: 45 exec/s: 24 rss: 71Mb L: 13/25 MS: 1 ChangeASCIIInt- 00:06:40.609 [2024-05-15 05:31:30.532541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:85ce01a6 cdw11:badb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.532565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 #25 NEW cov: 12085 ft: 14528 corp: 21/272b lim: 45 exec/s: 25 rss: 71Mb L: 11/25 MS: 1 InsertByte- 00:06:40.609 [2024-05-15 05:31:30.573198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.573224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 [2024-05-15 05:31:30.573339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:50505050 cdw11:50500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.573355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.609 [2024-05-15 05:31:30.573472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:50505050 cdw11:badb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.573489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.609 #26 NEW cov: 12085 ft: 14774 corp: 22/301b lim: 45 exec/s: 26 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:40.609 [2024-05-15 05:31:30.623103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff845685 cdw11:cebb0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.623128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.609 [2024-05-15 05:31:30.623250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:badbacce cdw11:0a470001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.609 [2024-05-15 05:31:30.623265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.868 #27 NEW cov: 12085 ft: 14783 corp: 23/319b lim: 45 exec/s: 27 rss: 72Mb L: 18/29 MS: 1 ChangeByte- 00:06:40.868 [2024-05-15 05:31:30.663425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.868 [2024-05-15 05:31:30.663455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.868 [2024-05-15 05:31:30.663570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:50505050 cdw11:50500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.868 [2024-05-15 05:31:30.663586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.868 [2024-05-15 05:31:30.663708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:50505050 cdw11:bada0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.868 [2024-05-15 05:31:30.663724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.868 #28 NEW cov: 12085 ft: 14806 corp: 24/348b lim: 45 exec/s: 28 rss: 72Mb L: 29/29 MS: 1 ChangeBit- 00:06:40.868 [2024-05-15 05:31:30.713087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ce840185 cdw11:ff510001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.868 [2024-05-15 05:31:30.713112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.868 #29 NEW cov: 12085 ft: 14884 corp: 25/365b lim: 45 exec/s: 29 rss: 72Mb L: 17/29 MS: 1 EraseBytes- 00:06:40.868 [2024-05-15 05:31:30.763460] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:85ce0a01 cdw11:84ff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.868 [2024-05-15 05:31:30.763486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.869 [2024-05-15 05:31:30.763595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ceba39ac cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.869 [2024-05-15 05:31:30.763611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.869 #30 NEW cov: 12085 ft: 14954 corp: 26/384b lim: 45 exec/s: 30 rss: 72Mb L: 19/29 MS: 1 CrossOver- 00:06:40.869 [2024-05-15 05:31:30.803311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.869 [2024-05-15 05:31:30.803336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.869 #31 NEW cov: 12085 ft: 14959 corp: 27/395b lim: 45 exec/s: 31 rss: 72Mb L: 11/29 MS: 1 EraseBytes- 00:06:40.869 [2024-05-15 05:31:30.853470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:0a0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.869 [2024-05-15 05:31:30.853496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.869 #32 NEW cov: 12085 ft: 14967 corp: 28/405b lim: 45 exec/s: 32 rss: 72Mb L: 10/29 MS: 1 ChangeBit- 00:06:41.128 [2024-05-15 05:31:30.903452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:30.903478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.128 #38 NEW cov: 12085 ft: 15010 corp: 29/414b lim: 45 exec/s: 38 rss: 72Mb L: 9/29 MS: 1 EraseBytes- 00:06:41.128 [2024-05-15 05:31:30.953368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:30.953398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.128 #39 NEW cov: 12085 ft: 15043 corp: 30/423b lim: 45 exec/s: 39 rss: 72Mb L: 9/29 MS: 1 EraseBytes- 00:06:41.128 [2024-05-15 05:31:31.003856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:7a31015a cdw11:3edb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:31.003887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.128 #40 NEW cov: 12085 ft: 15069 corp: 31/434b lim: 45 exec/s: 40 rss: 72Mb L: 11/29 MS: 1 ChangeBinInt- 00:06:41.128 [2024-05-15 05:31:31.053905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:db0a0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:31.053935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:41.128 [2024-05-15 05:31:31.054056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:01852222 cdw11:ceba0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:31.054073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.128 #41 NEW cov: 12085 ft: 15076 corp: 32/456b lim: 45 exec/s: 41 rss: 73Mb L: 22/29 MS: 1 InsertByte- 00:06:41.128 [2024-05-15 05:31:31.104180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:31.104207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.128 #42 NEW cov: 12085 ft: 15150 corp: 33/470b lim: 45 exec/s: 42 rss: 73Mb L: 14/29 MS: 1 EraseBytes- 00:06:41.128 [2024-05-15 05:31:31.144566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:0aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:31.144592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.128 [2024-05-15 05:31:31.144723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.128 [2024-05-15 05:31:31.144741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.386 #43 NEW cov: 12085 ft: 15175 corp: 34/491b lim: 45 exec/s: 43 rss: 73Mb L: 21/29 MS: 1 InsertRepeatedBytes- 00:06:41.386 [2024-05-15 05:31:31.194994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.386 [2024-05-15 05:31:31.195020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.386 [2024-05-15 05:31:31.195159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:50505050 cdw11:50500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.386 [2024-05-15 05:31:31.195177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.386 [2024-05-15 05:31:31.195290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:afaf50b0 cdw11:48db0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.386 [2024-05-15 05:31:31.195308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.386 #44 NEW cov: 12085 ft: 15194 corp: 35/520b lim: 45 exec/s: 44 rss: 73Mb L: 29/29 MS: 1 ChangeBinInt- 00:06:41.386 [2024-05-15 05:31:31.234627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01005601 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.387 [2024-05-15 05:31:31.234655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.387 #45 NEW cov: 12085 ft: 15223 corp: 36/530b lim: 45 exec/s: 45 rss: 73Mb L: 10/29 MS: 1 InsertByte- 
00:06:41.387 [2024-05-15 05:31:31.285214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.387 [2024-05-15 05:31:31.285245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.387 [2024-05-15 05:31:31.285386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:50505050 cdw11:50500002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.387 [2024-05-15 05:31:31.285405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.387 [2024-05-15 05:31:31.285512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:50505050 cdw11:badb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.387 [2024-05-15 05:31:31.285527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.387 #46 NEW cov: 12085 ft: 15260 corp: 37/559b lim: 45 exec/s: 46 rss: 73Mb L: 29/29 MS: 1 ChangeASCIIInt- 00:06:41.387 [2024-05-15 05:31:31.324600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ceba0185 cdw11:0aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.387 [2024-05-15 05:31:31.324627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.387 #47 NEW cov: 12085 ft: 15290 corp: 38/574b lim: 45 exec/s: 47 rss: 73Mb L: 15/29 MS: 1 EraseBytes- 00:06:41.387 [2024-05-15 05:31:31.374917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:85ce01a6 cdw11:badb0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.387 [2024-05-15 05:31:31.374943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.387 #48 NEW cov: 12085 ft: 15293 corp: 39/585b lim: 45 exec/s: 24 rss: 73Mb L: 11/29 MS: 1 ChangeBinInt- 00:06:41.387 #48 DONE cov: 12085 ft: 15293 corp: 39/585b lim: 45 exec/s: 24 rss: 73Mb 00:06:41.387 ###### Recommended dictionary. ###### 00:06:41.387 "\001\205\316\272\333\012G2" # Uses: 3 00:06:41.387 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:41.387 "\377\204\316\273Q;9\254" # Uses: 0 00:06:41.387 ###### End of recommended dictionary. 
###### 00:06:41.387 Done 48 runs in 2 second(s) 00:06:41.387 [2024-05-15 05:31:31.395906] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:41.645 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.645 05:31:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.645 05:31:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.646 05:31:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:41.646 [2024-05-15 05:31:31.565085] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
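For context, the set -x trace above (nvmf/run.sh lines @34 through @45) amounts to a small per-run setup routine before llvm_nvme_fuzz is launched. The lines below are a minimal sketch reconstructed from that trace for fuzzer index 6, not the verbatim run.sh source: the variable names i, corpus_dir and nvmf_cfg, the redirection of the sed output into the per-run config, and the destination of the leak-suppression echo lines are assumptions added for readability.

  # Hedged reconstruction of the traced per-fuzzer setup (index 6 -> TCP port 4406).
  i=6
  port="44$(printf %02d "$i")"                      # two-digit index appended to 44 -> 4406
  nvmf_cfg="/tmp/fuzz_json_${i}.conf"
  corpus_dir="$rootdir/../corpus/llvm_nvmf_${i}"    # $rootdir: the spdk checkout (assumed name)
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Rewrite the listener port in the JSON config template for this run (output redirect assumed).
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # LSAN suppressions referenced via LSAN_OPTIONS (suppression file path taken from the trace).
  echo "leak:spdk_nvmf_qpair_disconnect" >> /var/tmp/suppress_nvmf_fuzz
  echo "leak:nvmf_ctrlr_create" >> /var/tmp/suppress_nvmf_fuzz
  "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
      -t 1 -D "$corpus_dir" -Z "$i"

Each fuzzer index therefore gets its own TCP listener port, JSON config and corpus directory, which is why the previous run's /tmp/fuzz_json_5.conf is removed just before this setup and /tmp/fuzz_json_6.conf is created for the run whose output follows.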
00:06:41.646 [2024-05-15 05:31:31.565155] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268936 ] 00:06:41.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.904 [2024-05-15 05:31:31.744392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.905 [2024-05-15 05:31:31.809556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.905 [2024-05-15 05:31:31.868385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.905 [2024-05-15 05:31:31.884334] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:41.905 [2024-05-15 05:31:31.884698] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:41.905 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.905 INFO: Seed: 2992348221 00:06:41.905 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:41.905 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:41.905 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:41.905 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.905 #2 INITED exec/s: 0 rss: 63Mb 00:06:41.905 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.905 This may also happen if the target rejected all inputs we tried so far 00:06:42.163 [2024-05-15 05:31:31.933759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:42.163 [2024-05-15 05:31:31.933788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.421 NEW_FUNC[1/683]: 0x48c830 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:42.421 NEW_FUNC[2/683]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.421 #6 NEW cov: 11740 ft: 11759 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 4 ChangeBinInt-ShuffleBytes-ChangeByte-CrossOver- 00:06:42.421 [2024-05-15 05:31:32.264962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.421 [2024-05-15 05:31:32.264994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.421 [2024-05-15 05:31:32.265047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.421 [2024-05-15 05:31:32.265061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.421 [2024-05-15 05:31:32.265114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.421 [2024-05-15 05:31:32.265127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:42.421 [2024-05-15 05:31:32.265184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:42.421 [2024-05-15 05:31:32.265197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.421 NEW_FUNC[1/1]: 0x12e9f60 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:06:42.422 #7 NEW cov: 11888 ft: 12725 corp: 3/11b lim: 10 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:06:42.422 [2024-05-15 05:31:32.314819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.314845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.422 [2024-05-15 05:31:32.314898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.314912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.422 #8 NEW cov: 11894 ft: 13235 corp: 4/16b lim: 10 exec/s: 0 rss: 70Mb L: 5/8 MS: 1 InsertRepeatedBytes- 00:06:42.422 [2024-05-15 05:31:32.354782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024ff cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.354808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.422 #10 NEW cov: 11979 ft: 13474 corp: 5/18b lim: 10 exec/s: 0 rss: 70Mb L: 2/8 MS: 2 ChangeByte-InsertByte- 00:06:42.422 [2024-05-15 05:31:32.395069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.395095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.422 [2024-05-15 05:31:32.395150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.395164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.422 #11 NEW cov: 11979 ft: 13569 corp: 6/22b lim: 10 exec/s: 0 rss: 70Mb L: 4/8 MS: 1 CopyPart- 00:06:42.422 [2024-05-15 05:31:32.435278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.435304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.422 [2024-05-15 05:31:32.435359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.435374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.422 [2024-05-15 05:31:32.435437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.422 [2024-05-15 05:31:32.435451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.681 #12 NEW cov: 11979 ft: 13739 corp: 7/29b lim: 10 exec/s: 0 rss: 70Mb L: 7/8 MS: 1 EraseBytes- 00:06:42.681 [2024-05-15 05:31:32.485554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.485581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.485635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.485650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.485703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.485717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.485773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000d30a cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.485788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.681 #13 NEW cov: 11979 ft: 13824 corp: 8/37b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 ChangeBinInt- 00:06:42.681 [2024-05-15 05:31:32.525673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.525700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.525756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.525770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.525825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002c cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.525840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.525895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.525909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.681 #14 NEW cov: 11979 ft: 13864 corp: 9/45b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CopyPart- 00:06:42.681 [2024-05-15 05:31:32.565429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c0e cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.565456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.681 #15 NEW cov: 11979 ft: 13990 corp: 10/47b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeBit- 00:06:42.681 [2024-05-15 05:31:32.605561] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002cec cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.605588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.681 #16 NEW cov: 11979 ft: 14030 corp: 11/49b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeBinInt- 00:06:42.681 [2024-05-15 05:31:32.655873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024ff cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.655900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.655955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000abab cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.655969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.681 [2024-05-15 05:31:32.656026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000abab cdw11:00000000 00:06:42.681 [2024-05-15 05:31:32.656041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.681 #17 NEW cov: 11979 ft: 14058 corp: 12/56b lim: 10 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 InsertRepeatedBytes- 00:06:42.939 [2024-05-15 05:31:32.706075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.706102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.706157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002c cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.706174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.706228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.706243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.940 #18 NEW cov: 11979 ft: 14086 corp: 13/63b lim: 10 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 CrossOver- 00:06:42.940 [2024-05-15 05:31:32.756190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024df cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.756217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.756273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000abab cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.756287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.756343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000abab cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.756357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.940 #19 NEW cov: 11979 ft: 14119 corp: 14/70b lim: 10 exec/s: 0 rss: 71Mb L: 7/8 MS: 1 ChangeBit- 00:06:42.940 [2024-05-15 05:31:32.806431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.806459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.806514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000f6 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.806527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.806582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.806596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.806650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000d30a cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.806664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.940 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:42.940 #20 NEW cov: 12002 ft: 14153 corp: 15/78b lim: 10 exec/s: 0 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:06:42.940 [2024-05-15 05:31:32.846312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024ff cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.846338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.846395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.846410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.940 #21 NEW cov: 12002 ft: 14170 corp: 16/83b lim: 10 exec/s: 0 rss: 71Mb L: 5/8 MS: 1 InsertRepeatedBytes- 00:06:42.940 [2024-05-15 05:31:32.886275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000249c cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.886301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.940 #22 NEW cov: 12002 ft: 14198 corp: 17/85b lim: 10 exec/s: 0 rss: 71Mb L: 2/8 MS: 1 ChangeByte- 00:06:42.940 [2024-05-15 05:31:32.926684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.926710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.926765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.926779] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.940 [2024-05-15 05:31:32.926834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002000 cdw11:00000000 00:06:42.940 [2024-05-15 05:31:32.926848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.940 #23 NEW cov: 12002 ft: 14239 corp: 18/92b lim: 10 exec/s: 23 rss: 71Mb L: 7/8 MS: 1 ChangeBit- 00:06:43.199 [2024-05-15 05:31:32.966530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002d0e cdw11:00000000 00:06:43.199 [2024-05-15 05:31:32.966556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 #24 NEW cov: 12002 ft: 14247 corp: 19/94b lim: 10 exec/s: 24 rss: 71Mb L: 2/8 MS: 1 ChangeBit- 00:06:43.199 [2024-05-15 05:31:33.006766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024df cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.006793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.006846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000ab cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.006860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.199 #25 NEW cov: 12002 ft: 14279 corp: 20/98b lim: 10 exec/s: 25 rss: 71Mb L: 4/8 MS: 1 CrossOver- 00:06:43.199 [2024-05-15 05:31:33.057132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c00 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.057158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.057212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.057226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.057280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002000 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.057293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.057348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002c0e cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.057362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.199 #26 NEW cov: 12002 ft: 14308 corp: 21/106b lim: 10 exec/s: 26 rss: 71Mb L: 8/8 MS: 1 CrossOver- 00:06:43.199 [2024-05-15 05:31:33.097139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024df cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.097167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.097222] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000abab cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.097235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.097292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000abab cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.097306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.199 #27 NEW cov: 12002 ft: 14333 corp: 22/113b lim: 10 exec/s: 27 rss: 71Mb L: 7/8 MS: 1 ShuffleBytes- 00:06:43.199 [2024-05-15 05:31:33.137362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.137392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.137466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000200 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.137480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.137535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.137549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.137603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.137617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.199 #28 NEW cov: 12002 ft: 14359 corp: 23/121b lim: 10 exec/s: 28 rss: 71Mb L: 8/8 MS: 1 ChangeBit- 00:06:43.199 [2024-05-15 05:31:33.177157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000a50a cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.177182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 #29 NEW cov: 12002 ft: 14414 corp: 24/123b lim: 10 exec/s: 29 rss: 71Mb L: 2/8 MS: 1 ChangeByte- 00:06:43.199 [2024-05-15 05:31:33.217651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002400 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.217677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.217730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.217743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.217794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.217809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.199 [2024-05-15 05:31:33.217861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000df00 cdw11:00000000 00:06:43.199 [2024-05-15 05:31:33.217875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.458 #30 NEW cov: 12002 ft: 14433 corp: 25/132b lim: 10 exec/s: 30 rss: 71Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:43.458 [2024-05-15 05:31:33.267908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c00 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.267934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.267986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.268000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.268025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.268039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.268093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002000 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.268106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.268158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00002c0e cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.268172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.458 #31 NEW cov: 12002 ft: 14498 corp: 26/142b lim: 10 exec/s: 31 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:06:43.458 [2024-05-15 05:31:33.317627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000900 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.317652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.317707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.317721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.458 #32 NEW cov: 12002 ft: 14514 corp: 27/147b lim: 10 exec/s: 32 rss: 72Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:43.458 [2024-05-15 05:31:33.367652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a5b cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.367678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.458 #34 NEW cov: 12002 ft: 14521 corp: 28/149b lim: 10 exec/s: 34 rss: 72Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:06:43.458 [2024-05-15 
05:31:33.407865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e0a cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.407890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.407943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002c0a cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.407956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.458 #35 NEW cov: 12002 ft: 14547 corp: 29/153b lim: 10 exec/s: 35 rss: 72Mb L: 4/10 MS: 1 ChangeBinInt- 00:06:43.458 [2024-05-15 05:31:33.458391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002400 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.458417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.458472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000005d cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.458486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.458539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.458553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.458605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000000df cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.458618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.458 [2024-05-15 05:31:33.458675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000ab cdw11:00000000 00:06:43.458 [2024-05-15 05:31:33.458688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.717 #36 NEW cov: 12002 ft: 14556 corp: 30/163b lim: 10 exec/s: 36 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:06:43.717 [2024-05-15 05:31:33.508416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002dae cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.508442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.508497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aeae cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.508511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.508568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000aeae cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.508582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 
05:31:33.508634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ae0e cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.508648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.717 #37 NEW cov: 12002 ft: 14557 corp: 31/171b lim: 10 exec/s: 37 rss: 72Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:06:43.717 [2024-05-15 05:31:33.558438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002c00 cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.558463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.558516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.558529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.558583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.558597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.717 #38 NEW cov: 12002 ft: 14575 corp: 32/177b lim: 10 exec/s: 38 rss: 72Mb L: 6/10 MS: 1 EraseBytes- 00:06:43.717 [2024-05-15 05:31:33.608469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e0a cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.608494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.608548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ac0a cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.608562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.717 #39 NEW cov: 12002 ft: 14612 corp: 33/181b lim: 10 exec/s: 39 rss: 72Mb L: 4/10 MS: 1 ChangeBit- 00:06:43.717 [2024-05-15 05:31:33.658831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000078 cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.658857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.658910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.658923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.658976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002c00 cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.658990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.659039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002c cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.659053] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.717 #40 NEW cov: 12002 ft: 14624 corp: 34/189b lim: 10 exec/s: 40 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:06:43.717 [2024-05-15 05:31:33.708878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024ff cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.708903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.708954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000abab cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.708968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.717 [2024-05-15 05:31:33.709022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000abab cdw11:00000000 00:06:43.717 [2024-05-15 05:31:33.709036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.717 #41 NEW cov: 12002 ft: 14642 corp: 35/196b lim: 10 exec/s: 41 rss: 73Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:43.977 [2024-05-15 05:31:33.748837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dc00 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.748863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.748916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.748930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.977 #42 NEW cov: 12002 ft: 14670 corp: 36/201b lim: 10 exec/s: 42 rss: 73Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:43.977 [2024-05-15 05:31:33.799205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002400 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.799231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.799284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.799298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.799353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.799367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.799393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000df00 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.799403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.977 #43 NEW cov: 12002 ft: 14695 corp: 37/210b lim: 10 exec/s: 43 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 
00:06:43.977 [2024-05-15 05:31:33.839180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024df cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.839206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.839267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000aac cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.839280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.839333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ab cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.839347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.977 #44 NEW cov: 12002 ft: 14700 corp: 38/216b lim: 10 exec/s: 44 rss: 73Mb L: 6/10 MS: 1 CrossOver- 00:06:43.977 [2024-05-15 05:31:33.879512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002400 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.879537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.879590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000005d cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.879603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.879652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.879665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.879719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000000df cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.879732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.879783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000021 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.879796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:43.977 #45 NEW cov: 12002 ft: 14709 corp: 39/226b lim: 10 exec/s: 45 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:06:43.977 [2024-05-15 05:31:33.929551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000024d4 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.929576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.929629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.929643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:43.977 [2024-05-15 05:31:33.929694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.929708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.977 [2024-05-15 05:31:33.929761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000df00 cdw11:00000000 00:06:43.977 [2024-05-15 05:31:33.929774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.977 #46 NEW cov: 12002 ft: 14717 corp: 40/235b lim: 10 exec/s: 23 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:06:43.977 #46 DONE cov: 12002 ft: 14717 corp: 40/235b lim: 10 exec/s: 23 rss: 73Mb 00:06:43.977 Done 46 runs in 2 second(s) 00:06:43.977 [2024-05-15 05:31:33.949557] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:44.236 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:44.237 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:44.237 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:44.237 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.237 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.237 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.237 05:31:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:44.237 [2024-05-15 05:31:34.115136] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:44.237 [2024-05-15 05:31:34.115208] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269235 ] 00:06:44.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.496 [2024-05-15 05:31:34.296965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.496 [2024-05-15 05:31:34.363026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.496 [2024-05-15 05:31:34.422107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.496 [2024-05-15 05:31:34.438063] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:44.496 [2024-05-15 05:31:34.438580] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:44.496 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.496 INFO: Seed: 1252393807 00:06:44.496 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:44.496 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:44.496 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:44.496 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.496 #2 INITED exec/s: 0 rss: 64Mb 00:06:44.496 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:44.496 This may also happen if the target rejected all inputs we tried so far 00:06:44.496 [2024-05-15 05:31:34.514872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:44.496 [2024-05-15 05:31:34.514908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.496 [2024-05-15 05:31:34.515034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000012 cdw11:00000000 00:06:44.496 [2024-05-15 05:31:34.515052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.013 NEW_FUNC[1/684]: 0x48d220 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:45.013 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:45.013 #4 NEW cov: 11758 ft: 11759 corp: 2/5b lim: 10 exec/s: 0 rss: 70Mb L: 4/4 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:45.013 [2024-05-15 05:31:34.846237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001af9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.846279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.846393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.846412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.846525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.846543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.846651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.846669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.846777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.846793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.013 #6 NEW cov: 11888 ft: 12688 corp: 3/15b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:45.013 [2024-05-15 05:31:34.885461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.885489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.013 #7 NEW cov: 11894 ft: 13271 corp: 4/17b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 InsertByte- 00:06:45.013 [2024-05-15 05:31:34.925724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007b7b cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.925750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.925854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.925870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.013 #8 NEW cov: 11979 ft: 13625 corp: 5/21b lim: 10 exec/s: 0 rss: 71Mb L: 4/10 MS: 1 CopyPart- 00:06:45.013 [2024-05-15 05:31:34.976524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.976550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.976655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.976674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.976784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.976798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.976913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.976928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:34.977026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000a902 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:34.977042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.013 #12 NEW cov: 11979 ft: 13691 corp: 6/31b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 4 ChangeBit-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:45.013 [2024-05-15 05:31:35.016089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:35.016114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.013 [2024-05-15 05:31:35.016219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001200 cdw11:00000000 00:06:45.013 [2024-05-15 05:31:35.016236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.272 #13 NEW cov: 11979 ft: 13829 corp: 7/35b lim: 10 exec/s: 0 rss: 71Mb L: 4/10 MS: 1 ShuffleBytes- 00:06:45.272 [2024-05-15 05:31:35.066211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.066237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.066342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000012 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.066360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.272 #14 NEW cov: 11979 ft: 13922 corp: 8/40b lim: 10 exec/s: 0 rss: 71Mb L: 5/10 MS: 1 CopyPart- 00:06:45.272 [2024-05-15 05:31:35.116543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001af9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.116572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.116686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.116704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.116824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.116840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.272 #15 NEW cov: 11979 ft: 14128 corp: 9/46b lim: 10 exec/s: 0 rss: 71Mb L: 6/10 MS: 1 EraseBytes- 00:06:45.272 [2024-05-15 05:31:35.177186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001ad4 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.177212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.177322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.177341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.177456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.177474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.177586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.177603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.272 [2024-05-15 05:31:35.177709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.177726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.272 #16 NEW cov: 11979 ft: 14173 corp: 10/56b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeByte- 00:06:45.272 [2024-05-15 05:31:35.226457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a41 cdw11:00000000 00:06:45.272 
[2024-05-15 05:31:35.226484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.272 #17 NEW cov: 11979 ft: 14201 corp: 11/58b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 InsertByte- 00:06:45.272 [2024-05-15 05:31:35.266625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000330a cdw11:00000000 00:06:45.272 [2024-05-15 05:31:35.266653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.272 #18 NEW cov: 11979 ft: 14205 corp: 12/60b lim: 10 exec/s: 0 rss: 71Mb L: 2/10 MS: 1 InsertByte- 00:06:45.531 [2024-05-15 05:31:35.307371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001ad4 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.307408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.307536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.307553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.307662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.307679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.307792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.307807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.531 #19 NEW cov: 11979 ft: 14254 corp: 13/69b lim: 10 exec/s: 0 rss: 71Mb L: 9/10 MS: 1 EraseBytes- 00:06:45.531 [2024-05-15 05:31:35.357230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003f3f cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.357255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.357384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003f3f cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.357401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.357513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003f3f cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.357532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.357646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00003f0a cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.357662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.531 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:45.531 #20 NEW cov: 12002 ft: 14308 corp: 14/78b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:45.531 [2024-05-15 05:31:35.407695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.407720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.407831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.407863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.407973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.407990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.408095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000a9d9 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.408111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.408222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000a902 cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.408238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.531 #21 NEW cov: 12002 ft: 14337 corp: 15/88b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:06:45.531 [2024-05-15 05:31:35.457696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003fff cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.457721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.457830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.457845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.457958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.457975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.531 [2024-05-15 05:31:35.458092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.458107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.531 #22 NEW cov: 12002 ft: 14354 corp: 16/97b lim: 10 exec/s: 22 rss: 72Mb L: 9/10 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:45.531 [2024-05-15 05:31:35.507327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000410a cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.507355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.531 #23 NEW cov: 12002 ft: 14385 corp: 17/99b lim: 10 exec/s: 23 rss: 72Mb L: 2/10 MS: 1 ShuffleBytes- 00:06:45.531 [2024-05-15 05:31:35.547400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:45.531 [2024-05-15 05:31:35.547425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.790 #24 NEW cov: 12002 ft: 14409 corp: 18/101b lim: 10 exec/s: 24 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:06:45.790 [2024-05-15 05:31:35.588192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001af9 cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.588218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.588330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.588347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.588470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.588486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.588599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.588615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.588723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.588739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:45.790 #25 NEW cov: 12002 ft: 14429 corp: 19/111b lim: 10 exec/s: 25 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:45.790 [2024-05-15 05:31:35.637650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.637677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.790 #26 NEW cov: 12002 ft: 14458 corp: 20/113b lim: 10 exec/s: 26 rss: 72Mb L: 2/10 MS: 1 CopyPart- 00:06:45.790 [2024-05-15 05:31:35.688482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.688508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.688618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.688635] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.688742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.688757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.688873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.688890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.790 #27 NEW cov: 12002 ft: 14471 corp: 21/122b lim: 10 exec/s: 27 rss: 72Mb L: 9/10 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:45.790 [2024-05-15 05:31:35.737985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000340a cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.738012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.790 #28 NEW cov: 12002 ft: 14486 corp: 22/124b lim: 10 exec/s: 28 rss: 72Mb L: 2/10 MS: 1 ChangeASCIIInt- 00:06:45.790 [2024-05-15 05:31:35.788018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000330a cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.788045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.790 [2024-05-15 05:31:35.788154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:45.790 [2024-05-15 05:31:35.788171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.790 #29 NEW cov: 12002 ft: 14495 corp: 23/128b lim: 10 exec/s: 29 rss: 72Mb L: 4/10 MS: 1 CrossOver- 00:06:46.050 [2024-05-15 05:31:35.828387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.828414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.828522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.828539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.828643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.828660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.050 #30 NEW cov: 12002 ft: 14509 corp: 24/135b lim: 10 exec/s: 30 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:06:46.050 [2024-05-15 05:31:35.879214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.879239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.879349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.879368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.879394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.879404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.879422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.879437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.879545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.879561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.050 #31 NEW cov: 12002 ft: 14541 corp: 25/145b lim: 10 exec/s: 31 rss: 72Mb L: 10/10 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:06:46.050 [2024-05-15 05:31:35.918196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000500 cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.918223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:35.918336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.918352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.050 #32 NEW cov: 12002 ft: 14594 corp: 26/150b lim: 10 exec/s: 32 rss: 73Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:46.050 [2024-05-15 05:31:35.968600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:46.050 [2024-05-15 05:31:35.968628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.050 #33 NEW cov: 12002 ft: 14615 corp: 27/152b lim: 10 exec/s: 33 rss: 73Mb L: 2/10 MS: 1 CopyPart- 00:06:46.050 [2024-05-15 05:31:36.008712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000830a cdw11:00000000 00:06:46.050 [2024-05-15 05:31:36.008739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.050 #34 NEW cov: 12002 ft: 14633 corp: 28/154b lim: 10 exec/s: 34 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:46.050 [2024-05-15 05:31:36.048895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:36.048923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:46.050 [2024-05-15 05:31:36.049031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:36.049048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.050 [2024-05-15 05:31:36.049158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.050 [2024-05-15 05:31:36.049176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.050 #36 NEW cov: 12002 ft: 14651 corp: 29/161b lim: 10 exec/s: 36 rss: 73Mb L: 7/10 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:46.310 [2024-05-15 05:31:36.089204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.089232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.089338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.089355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.089470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.089487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.310 #38 NEW cov: 12002 ft: 14671 corp: 30/167b lim: 10 exec/s: 38 rss: 73Mb L: 6/10 MS: 2 EraseBytes-CrossOver- 00:06:46.310 [2024-05-15 05:31:36.139708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003fff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.139734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.139847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.139863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.139982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.139998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.140111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.140128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.310 #39 NEW cov: 12002 ft: 14709 corp: 31/176b lim: 10 exec/s: 39 rss: 73Mb L: 9/10 MS: 1 ShuffleBytes- 00:06:46.310 [2024-05-15 05:31:36.199624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00007b0a cdw11:00000000 00:06:46.310 
[2024-05-15 05:31:36.199651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.199761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.199777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.199888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.199906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.310 #40 NEW cov: 12002 ft: 14743 corp: 32/182b lim: 10 exec/s: 40 rss: 73Mb L: 6/10 MS: 1 CrossOver- 00:06:46.310 [2024-05-15 05:31:36.240152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001af9 cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.240177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.240281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.240297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.240435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.240452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.240560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.240577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.240683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000f9f9 cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.240701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.310 #41 NEW cov: 12002 ft: 14751 corp: 33/192b lim: 10 exec/s: 41 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:46.310 [2024-05-15 05:31:36.279885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.279910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.280016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000efff cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.280031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.310 [2024-05-15 05:31:36.280140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.310 
[2024-05-15 05:31:36.280156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.310 #42 NEW cov: 12002 ft: 14756 corp: 34/199b lim: 10 exec/s: 42 rss: 73Mb L: 7/10 MS: 1 ChangeBit- 00:06:46.310 [2024-05-15 05:31:36.329602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c27b cdw11:00000000 00:06:46.310 [2024-05-15 05:31:36.329630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.569 #43 NEW cov: 12002 ft: 14769 corp: 35/202b lim: 10 exec/s: 43 rss: 73Mb L: 3/10 MS: 1 InsertByte- 00:06:46.569 [2024-05-15 05:31:36.370105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.370131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.569 [2024-05-15 05:31:36.370244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.370261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.569 [2024-05-15 05:31:36.370375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000023ff cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.370394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.569 #44 NEW cov: 12002 ft: 14775 corp: 36/209b lim: 10 exec/s: 44 rss: 73Mb L: 7/10 MS: 1 InsertByte- 00:06:46.569 [2024-05-15 05:31:36.419744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000500 cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.419773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.569 [2024-05-15 05:31:36.419891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000040 cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.419908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.569 #45 NEW cov: 12002 ft: 14805 corp: 37/214b lim: 10 exec/s: 45 rss: 73Mb L: 5/10 MS: 1 ChangeBit- 00:06:46.569 [2024-05-15 05:31:36.470519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.470546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.569 [2024-05-15 05:31:36.470679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000dfff cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.470696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.569 [2024-05-15 05:31:36.470813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:46.569 [2024-05-15 05:31:36.470831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.569 #46 NEW cov: 12002 ft: 14820 corp: 38/221b lim: 10 exec/s: 23 rss: 73Mb L: 7/10 MS: 1 ChangeBit- 00:06:46.569 #46 DONE cov: 12002 ft: 14820 corp: 38/221b lim: 10 exec/s: 23 rss: 73Mb 00:06:46.569 ###### Recommended dictionary. ###### 00:06:46.569 "\377\377\377\377\377\377\377\377" # Uses: 2 00:06:46.569 ###### End of recommended dictionary. ###### 00:06:46.569 Done 46 runs in 2 second(s) 00:06:46.569 [2024-05-15 05:31:36.491867] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.827 05:31:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:46.827 [2024-05-15 05:31:36.661516] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:46.827 [2024-05-15 05:31:36.661583] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269758 ] 00:06:46.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.827 [2024-05-15 05:31:36.839999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.085 [2024-05-15 05:31:36.905937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.085 [2024-05-15 05:31:36.964806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.085 [2024-05-15 05:31:36.980760] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:47.085 [2024-05-15 05:31:36.981162] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:47.085 INFO: Running with entropic power schedule (0xFF, 100). 00:06:47.085 INFO: Seed: 3795390609 00:06:47.085 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:47.085 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:47.085 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:47.085 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.085 [2024-05-15 05:31:37.036480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.085 [2024-05-15 05:31:37.036509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.085 #2 INITED cov: 11785 ft: 11783 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:47.085 [2024-05-15 05:31:37.076651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.085 [2024-05-15 05:31:37.076677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.085 [2024-05-15 05:31:37.076738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.085 [2024-05-15 05:31:37.076756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.343 #3 NEW cov: 11916 ft: 13029 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:47.343 [2024-05-15 05:31:37.126830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.126856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.343 [2024-05-15 05:31:37.126915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.126929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:47.343 #4 NEW cov: 11922 ft: 13298 corp: 3/5b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:47.343 [2024-05-15 05:31:37.166891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.166918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.343 [2024-05-15 05:31:37.166978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.166993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.343 #5 NEW cov: 12007 ft: 13564 corp: 4/7b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:47.343 [2024-05-15 05:31:37.217100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.217127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.343 [2024-05-15 05:31:37.217186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.217201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.343 #6 NEW cov: 12007 ft: 13706 corp: 5/9b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 00:06:47.343 [2024-05-15 05:31:37.267016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.267042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.343 #7 NEW cov: 12007 ft: 13816 corp: 6/10b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 EraseBytes- 00:06:47.343 [2024-05-15 05:31:37.307306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.307332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.343 [2024-05-15 05:31:37.307392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.307406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.343 #8 NEW cov: 12007 ft: 13926 corp: 7/12b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeByte- 00:06:47.343 [2024-05-15 05:31:37.357456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.357485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:47.343 [2024-05-15 05:31:37.357546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.343 [2024-05-15 05:31:37.357560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.602 #9 NEW cov: 12007 ft: 13947 corp: 8/14b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 00:06:47.602 [2024-05-15 05:31:37.397404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.397430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.602 #10 NEW cov: 12007 ft: 13968 corp: 9/15b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 EraseBytes- 00:06:47.602 [2024-05-15 05:31:37.437545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.437569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.602 #11 NEW cov: 12016 ft: 14036 corp: 10/16b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 EraseBytes- 00:06:47.602 [2024-05-15 05:31:37.487831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.487857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.602 [2024-05-15 05:31:37.487915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.487930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.602 #12 NEW cov: 12016 ft: 14079 corp: 11/18b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CopyPart- 00:06:47.602 [2024-05-15 05:31:37.537944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.537970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.602 [2024-05-15 05:31:37.538029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.538043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.602 #13 NEW cov: 12016 ft: 14134 corp: 12/20b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 CopyPart- 00:06:47.602 [2024-05-15 05:31:37.588272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.588297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:47.602 [2024-05-15 05:31:37.588357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.588371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.602 [2024-05-15 05:31:37.588434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.602 [2024-05-15 05:31:37.588449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.602 #14 NEW cov: 12016 ft: 14336 corp: 13/23b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 CopyPart- 00:06:47.861 [2024-05-15 05:31:37.628210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.628236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.861 [2024-05-15 05:31:37.628297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.628311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.861 #15 NEW cov: 12016 ft: 14359 corp: 14/25b lim: 5 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 CrossOver- 00:06:47.861 [2024-05-15 05:31:37.678860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.678885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.861 [2024-05-15 05:31:37.678945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.678959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.861 [2024-05-15 05:31:37.679019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.679032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.861 [2024-05-15 05:31:37.679089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.679103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.861 [2024-05-15 05:31:37.679161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.861 [2024-05-15 05:31:37.679175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:47.861 #16 NEW cov: 12016 ft: 14741 corp: 15/30b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:47.862 [2024-05-15 05:31:37.718283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.862 [2024-05-15 05:31:37.718309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.862 #17 NEW cov: 12016 ft: 14810 corp: 16/31b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:06:47.862 [2024-05-15 05:31:37.758435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.862 [2024-05-15 05:31:37.758461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.862 #18 NEW cov: 12016 ft: 14829 corp: 17/32b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:06:47.862 [2024-05-15 05:31:37.798692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.862 [2024-05-15 05:31:37.798717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.862 [2024-05-15 05:31:37.798775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.862 [2024-05-15 05:31:37.798792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.862 #19 NEW cov: 12016 ft: 14905 corp: 18/34b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:06:47.862 [2024-05-15 05:31:37.848823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.862 [2024-05-15 05:31:37.848849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.862 [2024-05-15 05:31:37.848906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:47.862 [2024-05-15 05:31:37.848920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.862 #20 NEW cov: 12016 ft: 14951 corp: 19/36b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CrossOver- 00:06:48.121 [2024-05-15 05:31:37.899109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-15 05:31:37.899135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.121 [2024-05-15 05:31:37.899198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-15 05:31:37.899212] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.121 [2024-05-15 05:31:37.899272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.121 [2024-05-15 05:31:37.899286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.380 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:48.380 #21 NEW cov: 12039 ft: 15026 corp: 20/39b lim: 5 exec/s: 21 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:48.380 [2024-05-15 05:31:38.210038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.210072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.210135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.210150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.210208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.210222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.380 #22 NEW cov: 12039 ft: 15061 corp: 21/42b lim: 5 exec/s: 22 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:48.380 [2024-05-15 05:31:38.250023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.250050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.250114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.250131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.250188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.250203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.380 #23 NEW cov: 12039 ft: 15117 corp: 22/45b lim: 5 exec/s: 23 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:48.380 [2024-05-15 05:31:38.300183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.300209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:48.380 [2024-05-15 05:31:38.300269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.300283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.300343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.300357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.380 #24 NEW cov: 12039 ft: 15136 corp: 23/48b lim: 5 exec/s: 24 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:48.380 [2024-05-15 05:31:38.350223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.350249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.350311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.350325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.380 #25 NEW cov: 12039 ft: 15144 corp: 24/50b lim: 5 exec/s: 25 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:48.380 [2024-05-15 05:31:38.400348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.400374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.380 [2024-05-15 05:31:38.400443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.380 [2024-05-15 05:31:38.400458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.640 #26 NEW cov: 12039 ft: 15165 corp: 25/52b lim: 5 exec/s: 26 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:48.640 [2024-05-15 05:31:38.440397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.440422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.640 [2024-05-15 05:31:38.440483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.440498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.640 #27 NEW cov: 12039 ft: 15198 corp: 26/54b lim: 5 exec/s: 27 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:48.640 [2024-05-15 05:31:38.480902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.480926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.640 [2024-05-15 05:31:38.480988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.481001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.640 [2024-05-15 05:31:38.481059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.481073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.640 [2024-05-15 05:31:38.481133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.481146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.640 #28 NEW cov: 12039 ft: 15212 corp: 27/58b lim: 5 exec/s: 28 rss: 73Mb L: 4/5 MS: 1 EraseBytes- 00:06:48.640 [2024-05-15 05:31:38.530538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.530563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.640 #29 NEW cov: 12039 ft: 15224 corp: 28/59b lim: 5 exec/s: 29 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:48.640 [2024-05-15 05:31:38.580683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.580709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.640 #30 NEW cov: 12039 ft: 15238 corp: 29/60b lim: 5 exec/s: 30 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:48.640 [2024-05-15 05:31:38.631164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.631190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.640 [2024-05-15 05:31:38.631250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.631264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.640 [2024-05-15 05:31:38.631323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.640 [2024-05-15 05:31:38.631337] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.640 #31 NEW cov: 12039 ft: 15278 corp: 30/63b lim: 5 exec/s: 31 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:48.899 [2024-05-15 05:31:38.671027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.671053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.671116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.671131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.899 #32 NEW cov: 12039 ft: 15284 corp: 31/65b lim: 5 exec/s: 32 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:48.899 [2024-05-15 05:31:38.711191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.711218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.711278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.711292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.899 #33 NEW cov: 12039 ft: 15326 corp: 32/67b lim: 5 exec/s: 33 rss: 73Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:48.899 [2024-05-15 05:31:38.761497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.761523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.761587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.761601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.761660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.761674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.899 #34 NEW cov: 12039 ft: 15339 corp: 33/70b lim: 5 exec/s: 34 rss: 73Mb L: 3/5 MS: 1 ChangeByte- 00:06:48.899 [2024-05-15 05:31:38.801792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.801818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.801876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.801891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.801947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.801961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.802020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.802033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.899 #35 NEW cov: 12039 ft: 15345 corp: 34/74b lim: 5 exec/s: 35 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:06:48.899 [2024-05-15 05:31:38.851585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.851615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.851675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.851690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.899 #36 NEW cov: 12039 ft: 15347 corp: 35/76b lim: 5 exec/s: 36 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:06:48.899 [2024-05-15 05:31:38.891653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.891680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.899 [2024-05-15 05:31:38.891741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.899 [2024-05-15 05:31:38.891756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.899 #37 NEW cov: 12039 ft: 15361 corp: 36/78b lim: 5 exec/s: 37 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:06:49.159 [2024-05-15 05:31:38.931582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.159 [2024-05-15 05:31:38.931608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.159 #38 NEW cov: 12039 ft: 15418 corp: 37/79b lim: 5 exec/s: 38 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:06:49.159 [2024-05-15 05:31:38.971872] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.159 [2024-05-15 05:31:38.971898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.159 [2024-05-15 05:31:38.971958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.159 [2024-05-15 05:31:38.971973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.159 #39 NEW cov: 12039 ft: 15432 corp: 38/81b lim: 5 exec/s: 39 rss: 74Mb L: 2/5 MS: 1 CMP- DE: "\377\377"- 00:06:49.159 [2024-05-15 05:31:39.011817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.159 [2024-05-15 05:31:39.011844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.159 #40 NEW cov: 12039 ft: 15433 corp: 39/82b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ChangeBit- 00:06:49.159 #40 DONE cov: 12039 ft: 15433 corp: 39/82b lim: 5 exec/s: 20 rss: 74Mb 00:06:49.159 ###### Recommended dictionary. ###### 00:06:49.159 "\377\377" # Uses: 0 00:06:49.159 ###### End of recommended dictionary. ###### 00:06:49.159 Done 40 runs in 2 second(s) 00:06:49.159 [2024-05-15 05:31:39.034122] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.159 05:31:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:49.418 [2024-05-15 05:31:39.200615] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:49.418 [2024-05-15 05:31:39.200685] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270272 ] 00:06:49.418 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.418 [2024-05-15 05:31:39.378106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.677 [2024-05-15 05:31:39.443643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.677 [2024-05-15 05:31:39.502542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.677 [2024-05-15 05:31:39.518494] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:49.677 [2024-05-15 05:31:39.518922] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:49.677 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:49.677 INFO: Seed: 2038413179 00:06:49.677 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:49.677 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:49.677 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:49.677 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.677 [2024-05-15 05:31:39.574190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.677 [2024-05-15 05:31:39.574218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.677 #2 INITED cov: 11783 ft: 11787 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:49.677 [2024-05-15 05:31:39.614366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.677 [2024-05-15 05:31:39.614399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.677 [2024-05-15 05:31:39.614459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.677 [2024-05-15 05:31:39.614477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.677 #3 NEW cov: 11916 ft: 13116 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 CrossOver- 00:06:49.677 [2024-05-15 05:31:39.664328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.677 [2024-05-15 05:31:39.664353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.677 #4 NEW cov: 11922 ft: 13280 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 EraseBytes- 00:06:49.936 [2024-05-15 05:31:39.714529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.714555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.936 #5 NEW cov: 12007 ft: 13532 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 CopyPart- 00:06:49.936 [2024-05-15 05:31:39.764985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.765010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.765068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.765082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.765140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT 
(0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.765153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.936 #6 NEW cov: 12007 ft: 13880 corp: 5/8b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:06:49.936 [2024-05-15 05:31:39.814932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.814958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.815014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.815028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.936 #7 NEW cov: 12007 ft: 14005 corp: 6/10b lim: 5 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 EraseBytes- 00:06:49.936 [2024-05-15 05:31:39.865527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.865553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.865609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.865623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.865679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.865693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.865749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.865763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.865818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.865831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.936 #8 NEW cov: 12007 ft: 14381 corp: 7/15b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "V\000\000\000"- 00:06:49.936 [2024-05-15 05:31:39.905161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.905186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.905242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.905257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.936 #9 NEW cov: 12007 ft: 14480 corp: 8/17b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:06:49.936 [2024-05-15 05:31:39.945280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.945305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.936 [2024-05-15 05:31:39.945361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.936 [2024-05-15 05:31:39.945375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.195 #10 NEW cov: 12007 ft: 14525 corp: 9/19b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:06:50.195 [2024-05-15 05:31:39.985857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.195 [2024-05-15 05:31:39.985883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.195 [2024-05-15 05:31:39.985938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.195 [2024-05-15 05:31:39.985952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.195 [2024-05-15 05:31:39.986008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.195 [2024-05-15 05:31:39.986022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.195 [2024-05-15 05:31:39.986077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:39.986090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:39.986143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:39.986157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:50.196 #11 NEW cov: 12007 ft: 14579 corp: 10/24b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:50.196 [2024-05-15 05:31:40.025707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.025733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:40.025795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.025810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:40.025869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.025883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.196 #12 NEW cov: 12007 ft: 14605 corp: 11/27b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 InsertByte- 00:06:50.196 [2024-05-15 05:31:40.075817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.075847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:40.075909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.075924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.196 #13 NEW cov: 12007 ft: 14638 corp: 12/29b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:50.196 [2024-05-15 05:31:40.115925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.115952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:40.116013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.116028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:40.116086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.116099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.196 #14 NEW cov: 12007 ft: 14745 corp: 13/32b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 CopyPart- 00:06:50.196 [2024-05-15 05:31:40.155865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.155892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:50.196 [2024-05-15 05:31:40.155952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.155966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.196 #15 NEW cov: 12007 ft: 14772 corp: 14/34b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:50.196 [2024-05-15 05:31:40.195808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.196 [2024-05-15 05:31:40.195834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.455 #16 NEW cov: 12007 ft: 14848 corp: 15/35b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:06:50.455 [2024-05-15 05:31:40.246080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.246107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.246167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.246181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.455 #17 NEW cov: 12007 ft: 14856 corp: 16/37b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:06:50.455 [2024-05-15 05:31:40.286218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.286245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.286302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.286318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.455 #18 NEW cov: 12007 ft: 14885 corp: 17/39b lim: 5 exec/s: 0 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:50.455 [2024-05-15 05:31:40.336673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.336700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.336758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.336772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.336831] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.336845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.336901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.336915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.455 #19 NEW cov: 12007 ft: 14895 corp: 18/43b lim: 5 exec/s: 0 rss: 70Mb L: 4/5 MS: 1 CrossOver- 00:06:50.455 [2024-05-15 05:31:40.376315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.376341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.455 #20 NEW cov: 12007 ft: 14944 corp: 19/44b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:06:50.455 [2024-05-15 05:31:40.417012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.417042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.417104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.417118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.417175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.417189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.455 [2024-05-15 05:31:40.417245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.455 [2024-05-15 05:31:40.417259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.714 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:50.714 #21 NEW cov: 12030 ft: 14959 corp: 20/48b lim: 5 exec/s: 21 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:06:50.973 [2024-05-15 05:31:40.737503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.737538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.737598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.737613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.973 #22 NEW cov: 12030 ft: 14970 corp: 21/50b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:50.973 [2024-05-15 05:31:40.777713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.777739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.777800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.777814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.777872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.777886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.973 #23 NEW cov: 12030 ft: 14983 corp: 22/53b lim: 5 exec/s: 23 rss: 72Mb L: 3/5 MS: 1 EraseBytes- 00:06:50.973 [2024-05-15 05:31:40.828062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.828087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.828147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.828161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.828218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.828232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.828286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.828300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.973 #24 NEW cov: 12030 ft: 15005 corp: 23/57b lim: 5 exec/s: 24 rss: 72Mb L: 4/5 MS: 1 CrossOver- 00:06:50.973 [2024-05-15 05:31:40.877967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.877992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.878053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.878067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.878125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.878140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.973 #25 NEW cov: 12030 ft: 15021 corp: 24/60b lim: 5 exec/s: 25 rss: 72Mb L: 3/5 MS: 1 CopyPart- 00:06:50.973 [2024-05-15 05:31:40.918242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.918267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.918327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.918341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.918402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.918416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.918472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.918485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:50.973 #26 NEW cov: 12030 ft: 15035 corp: 25/64b lim: 5 exec/s: 26 rss: 72Mb L: 4/5 MS: 1 InsertByte- 00:06:50.973 [2024-05-15 05:31:40.958036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.958061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.973 [2024-05-15 05:31:40.958118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.973 [2024-05-15 05:31:40.958135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.973 #27 NEW cov: 12030 ft: 15044 corp: 26/66b lim: 5 exec/s: 27 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:51.233 [2024-05-15 05:31:40.998342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:40.998368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:40.998443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:40.998458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:40.998514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:40.998528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.233 #28 NEW cov: 12030 ft: 15074 corp: 27/69b lim: 5 exec/s: 28 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:51.233 [2024-05-15 05:31:41.038474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.038499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:41.038555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.038568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:41.038625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.038639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.233 #29 NEW cov: 12030 ft: 15102 corp: 28/72b lim: 5 exec/s: 29 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:51.233 [2024-05-15 05:31:41.088451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.088477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:41.088538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.088553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.233 #30 NEW cov: 12030 ft: 15113 corp: 29/74b lim: 5 exec/s: 30 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:51.233 [2024-05-15 05:31:41.128563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.128588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:41.128647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.128662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.233 #31 NEW cov: 12030 ft: 15130 corp: 30/76b lim: 5 exec/s: 31 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:51.233 [2024-05-15 05:31:41.178735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.178760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:41.178816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.178831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.233 #32 NEW cov: 12030 ft: 15167 corp: 31/78b lim: 5 exec/s: 32 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:51.233 [2024-05-15 05:31:41.218824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.218850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.233 [2024-05-15 05:31:41.218907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.233 [2024-05-15 05:31:41.218921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.233 #33 NEW cov: 12030 ft: 15174 corp: 32/80b lim: 5 exec/s: 33 rss: 72Mb L: 2/5 MS: 1 InsertByte- 00:06:51.493 [2024-05-15 05:31:41.259286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.259311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.259369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.259390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.259445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.259460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.259513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.259527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.493 #34 NEW cov: 12030 ft: 15196 corp: 33/84b lim: 5 exec/s: 34 rss: 72Mb L: 4/5 MS: 1 InsertByte- 00:06:51.493 [2024-05-15 05:31:41.309250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.309276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.309336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.309350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.309411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.309429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.493 #35 NEW cov: 12030 ft: 15218 corp: 34/87b lim: 5 exec/s: 35 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:06:51.493 [2024-05-15 05:31:41.349216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.349242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.349299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.349313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.493 #36 NEW cov: 12030 ft: 15222 corp: 35/89b lim: 5 exec/s: 36 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:51.493 [2024-05-15 05:31:41.389474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.389500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.389556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.389570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.389628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.389642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.493 #37 NEW 
cov: 12030 ft: 15232 corp: 36/92b lim: 5 exec/s: 37 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:51.493 [2024-05-15 05:31:41.439460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.439485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.439539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.439553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.493 #38 NEW cov: 12030 ft: 15262 corp: 37/94b lim: 5 exec/s: 38 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:51.493 [2024-05-15 05:31:41.479725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.479750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.479807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.479822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.493 [2024-05-15 05:31:41.479879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.493 [2024-05-15 05:31:41.479893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.493 #39 NEW cov: 12030 ft: 15306 corp: 38/97b lim: 5 exec/s: 39 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:51.752 [2024-05-15 05:31:41.529719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.752 [2024-05-15 05:31:41.529744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.752 [2024-05-15 05:31:41.529800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.752 [2024-05-15 05:31:41.529814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.752 #40 NEW cov: 12030 ft: 15316 corp: 39/99b lim: 5 exec/s: 20 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:51.752 #40 DONE cov: 12030 ft: 15316 corp: 39/99b lim: 5 exec/s: 20 rss: 73Mb 00:06:51.752 ###### Recommended dictionary. ###### 00:06:51.752 "V\000\000\000" # Uses: 0 00:06:51.752 ###### End of recommended dictionary. 
###### 00:06:51.752 Done 40 runs in 2 second(s) 00:06:51.752 [2024-05-15 05:31:41.559501] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.752 05:31:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:51.752 [2024-05-15 05:31:41.729013] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:51.752 [2024-05-15 05:31:41.729083] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270580 ] 00:06:51.752 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.011 [2024-05-15 05:31:41.913499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.011 [2024-05-15 05:31:41.979748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.271 [2024-05-15 05:31:42.038924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.271 [2024-05-15 05:31:42.054874] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:52.271 [2024-05-15 05:31:42.055310] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:52.271 INFO: Running with entropic power schedule (0xFF, 100). 00:06:52.271 INFO: Seed: 278458148 00:06:52.271 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:52.271 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:52.271 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:52.271 INFO: A corpus is not provided, starting from an empty corpus 00:06:52.271 #2 INITED exec/s: 0 rss: 64Mb 00:06:52.271 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:52.271 This may also happen if the target rejected all inputs we tried so far 00:06:52.271 [2024-05-15 05:31:42.100792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.271 [2024-05-15 05:31:42.100821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.271 [2024-05-15 05:31:42.100878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.271 [2024-05-15 05:31:42.100891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.271 [2024-05-15 05:31:42.100950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.271 [2024-05-15 05:31:42.100963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.271 [2024-05-15 05:31:42.101017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.271 [2024-05-15 05:31:42.101030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.531 NEW_FUNC[1/685]: 0x48eb90 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:52.531 NEW_FUNC[2/685]: 0x4be420 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.531 #14 NEW cov: 11809 ft: 11810 corp: 2/38b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:52.531 [2024-05-15 05:31:42.442919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.531 [2024-05-15 05:31:42.442978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.531 [2024-05-15 05:31:42.443145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00290000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.531 [2024-05-15 05:31:42.443175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.531 [2024-05-15 05:31:42.443347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.531 [2024-05-15 05:31:42.443375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.531 [2024-05-15 05:31:42.443543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.531 [2024-05-15 05:31:42.443575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.531 #15 NEW cov: 11939 ft: 12607 corp: 3/76b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertByte- 00:06:52.532 [2024-05-15 05:31:42.491772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a832c00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.532 [2024-05-15 05:31:42.491799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.532 #19 NEW cov: 11945 ft: 13388 corp: 4/90b lim: 40 exec/s: 0 rss: 70Mb L: 14/38 MS: 4 CopyPart-InsertByte-InsertByte-InsertRepeatedBytes- 00:06:52.532 [2024-05-15 05:31:42.532448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:28ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.532 [2024-05-15 05:31:42.532477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.532 [2024-05-15 05:31:42.532611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.532 [2024-05-15 05:31:42.532627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.532 [2024-05-15 05:31:42.532763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.532 [2024-05-15 05:31:42.532780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.532 [2024-05-15 05:31:42.532910] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.532 [2024-05-15 05:31:42.532926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.791 #21 NEW cov: 12030 ft: 13713 corp: 5/127b lim: 40 exec/s: 0 rss: 70Mb L: 37/38 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:52.791 [2024-05-15 05:31:42.572891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.572917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.573042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00290000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.573059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.573187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.573202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.573330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.573346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.791 #22 NEW cov: 12030 ft: 13817 corp: 6/165b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 ShuffleBytes- 00:06:52.791 [2024-05-15 05:31:42.633184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.633210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.633345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.633362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.633491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.633506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.633648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.633664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:06:52.791 #28 NEW cov: 12030 ft: 13981 corp: 7/202b lim: 40 exec/s: 0 rss: 70Mb L: 37/38 MS: 1 CrossOver- 00:06:52.791 [2024-05-15 05:31:42.673032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.673057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.673194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.673211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.791 [2024-05-15 05:31:42.673343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.791 [2024-05-15 05:31:42.673358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.673493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.673509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.792 #29 NEW cov: 12030 ft: 14050 corp: 8/234b lim: 40 exec/s: 0 rss: 70Mb L: 32/38 MS: 1 EraseBytes- 00:06:52.792 [2024-05-15 05:31:42.733526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.733553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.733698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.733715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.733842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00410000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.733859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.733989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.734008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.792 #30 NEW cov: 12030 ft: 14107 corp: 9/271b lim: 40 exec/s: 0 rss: 70Mb L: 37/38 MS: 1 ChangeByte- 00:06:52.792 [2024-05-15 05:31:42.783709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.783735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.783865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.783879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.784013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.784029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.792 [2024-05-15 05:31:42.784154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.792 [2024-05-15 05:31:42.784169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.792 #31 NEW cov: 12030 ft: 14131 corp: 10/309b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertByte- 00:06:53.050 [2024-05-15 05:31:42.823125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a837e2c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.050 [2024-05-15 05:31:42.823152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.050 #37 NEW cov: 12030 ft: 14215 corp: 11/324b lim: 40 exec/s: 0 rss: 71Mb L: 15/38 MS: 1 InsertByte- 00:06:53.050 [2024-05-15 05:31:42.883940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000c2e2c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.050 [2024-05-15 05:31:42.883966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.050 [2024-05-15 05:31:42.884103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.050 [2024-05-15 05:31:42.884121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.050 [2024-05-15 05:31:42.884260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.050 [2024-05-15 05:31:42.884276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.050 [2024-05-15 05:31:42.884412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.050 [2024-05-15 05:31:42.884429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.050 #38 NEW cov: 12030 ft: 14239 corp: 12/361b lim: 40 exec/s: 0 rss: 71Mb L: 37/38 MS: 1 CMP- DE: "\014.,\002\000\000\000\000"- 
00:06:53.050 [2024-05-15 05:31:42.924112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.050 [2024-05-15 05:31:42.924139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.050 [2024-05-15 05:31:42.924288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:42.924308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:42.924439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:42.924456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:42.924582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:42.924598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.051 #39 NEW cov: 12030 ft: 14337 corp: 13/400b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 CrossOver- 00:06:53.051 [2024-05-15 05:31:42.973611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:42.973637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.051 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:53.051 #42 NEW cov: 12053 ft: 14409 corp: 14/414b lim: 40 exec/s: 0 rss: 71Mb L: 14/39 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:06:53.051 [2024-05-15 05:31:43.024454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.024479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:43.024625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.024641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:43.024773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.024790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:43.024930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.024946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.051 #43 NEW cov: 12053 ft: 14431 corp: 15/452b lim: 40 exec/s: 0 rss: 71Mb L: 38/39 MS: 1 CopyPart- 00:06:53.051 [2024-05-15 05:31:43.064532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00100000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.064557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:43.064685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.064700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:43.064829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.064846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.051 [2024-05-15 05:31:43.064982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.051 [2024-05-15 05:31:43.064999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.309 #49 NEW cov: 12053 ft: 14470 corp: 16/484b lim: 40 exec/s: 49 rss: 71Mb L: 32/39 MS: 1 ChangeBit- 00:06:53.309 [2024-05-15 05:31:43.114068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:000c2e2c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.114094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.114213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.114229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.114364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.114383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.309 #50 NEW cov: 12053 ft: 14742 corp: 17/514b lim: 40 exec/s: 50 rss: 71Mb L: 30/39 MS: 1 EraseBytes- 00:06:53.309 [2024-05-15 05:31:43.174937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.174963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:53.309 [2024-05-15 05:31:43.175099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00290000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.175113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.175255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.175270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.175385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.175402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.309 #51 NEW cov: 12053 ft: 14769 corp: 18/552b lim: 40 exec/s: 51 rss: 71Mb L: 38/39 MS: 1 CrossOver- 00:06:53.309 [2024-05-15 05:31:43.214643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.214670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.214796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.214812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.214925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000400 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.214944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.215076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.215093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.309 #52 NEW cov: 12053 ft: 14780 corp: 19/589b lim: 40 exec/s: 52 rss: 71Mb L: 37/39 MS: 1 ChangeByte- 00:06:53.309 [2024-05-15 05:31:43.254942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:005b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.254968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.255104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.255121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.255251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.255268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.255385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.255396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.309 #53 NEW cov: 12053 ft: 14801 corp: 20/627b lim: 40 exec/s: 53 rss: 71Mb L: 38/39 MS: 1 ChangeByte- 00:06:53.309 [2024-05-15 05:31:43.304575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.304603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.304747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.304765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.309 [2024-05-15 05:31:43.304897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.309 [2024-05-15 05:31:43.304914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.309 #54 NEW cov: 12053 ft: 14852 corp: 21/651b lim: 40 exec/s: 54 rss: 71Mb L: 24/39 MS: 1 EraseBytes- 00:06:53.569 [2024-05-15 05:31:43.345071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.345098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.345230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000002f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.345245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.345385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.345405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.345531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.345549] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.569 #55 NEW cov: 12053 ft: 14866 corp: 22/689b lim: 40 exec/s: 55 rss: 71Mb L: 38/39 MS: 1 InsertByte- 00:06:53.569 [2024-05-15 05:31:43.405591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:005b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.405619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.405748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:5b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.405763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.405898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.405913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.406054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.406069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.569 #56 NEW cov: 12053 ft: 14876 corp: 23/727b lim: 40 exec/s: 56 rss: 71Mb L: 38/39 MS: 1 CopyPart- 00:06:53.569 [2024-05-15 05:31:43.465804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:005b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.465832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.465967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.465985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.466120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.466138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.466271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.466289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.569 #57 NEW cov: 12053 ft: 14895 corp: 24/766b lim: 40 exec/s: 57 rss: 72Mb L: 39/39 MS: 1 InsertByte- 00:06:53.569 [2024-05-15 05:31:43.515964] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.515992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.516126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.516144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.516281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.516298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.516432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:003a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.516449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.569 #58 NEW cov: 12053 ft: 14910 corp: 25/804b lim: 40 exec/s: 58 rss: 72Mb L: 38/39 MS: 1 ShuffleBytes- 00:06:53.569 [2024-05-15 05:31:43.576087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.576115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.576252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000002f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.576269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.576415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.576434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.569 [2024-05-15 05:31:43.576570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000027 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.569 [2024-05-15 05:31:43.576587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.829 #59 NEW cov: 12053 ft: 14914 corp: 26/843b lim: 40 exec/s: 59 rss: 72Mb L: 39/39 MS: 1 InsertByte- 00:06:53.829 [2024-05-15 05:31:43.635818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.635846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:53.829 [2024-05-15 05:31:43.635974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.635992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.829 #60 NEW cov: 12053 ft: 15125 corp: 27/861b lim: 40 exec/s: 60 rss: 72Mb L: 18/39 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:53.829 [2024-05-15 05:31:43.696494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.696521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.696668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000002f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.696686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.696820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00004100 cdw11:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.696836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.696972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000027 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.696989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.829 #61 NEW cov: 12053 ft: 15130 corp: 28/900b lim: 40 exec/s: 61 rss: 72Mb L: 39/39 MS: 1 ChangeByte- 00:06:53.829 [2024-05-15 05:31:43.746124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:005b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.746150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.746282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.746298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.746432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00007a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.746447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.829 #62 NEW cov: 12053 ft: 15136 corp: 29/928b lim: 40 exec/s: 62 rss: 72Mb L: 28/39 MS: 1 EraseBytes- 00:06:53.829 [2024-05-15 05:31:43.806749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.806775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.806907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.806924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.807052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:832c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.807069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.829 [2024-05-15 05:31:43.807201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.829 [2024-05-15 05:31:43.807217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.829 #63 NEW cov: 12053 ft: 15155 corp: 30/961b lim: 40 exec/s: 63 rss: 72Mb L: 33/39 MS: 1 InsertRepeatedBytes- 00:06:54.126 [2024-05-15 05:31:43.857206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0ae2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.857233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.857387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.857404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.857545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.857562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.857698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.857714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.857850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.857867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:54.126 #64 NEW cov: 12053 ft: 15200 corp: 31/1001b lim: 40 exec/s: 64 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:54.126 [2024-05-15 05:31:43.896438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE 
(82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff7fffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.896465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.896591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.896608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.126 #65 NEW cov: 12053 ft: 15202 corp: 32/1020b lim: 40 exec/s: 65 rss: 72Mb L: 19/40 MS: 1 InsertByte- 00:06:54.126 [2024-05-15 05:31:43.946953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:5b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.946980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.947120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.947138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.947270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:007a0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.947287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.126 #66 NEW cov: 12053 ft: 15219 corp: 33/1048b lim: 40 exec/s: 66 rss: 72Mb L: 28/40 MS: 1 CopyPart- 00:06:54.126 [2024-05-15 05:31:43.997247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:005b0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.997274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.997387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:5b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.997403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.997537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:05000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.997553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.126 [2024-05-15 05:31:43.997690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:43.997705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.126 #67 NEW cov: 12053 ft: 15227 corp: 34/1086b lim: 40 
exec/s: 67 rss: 73Mb L: 38/40 MS: 1 ChangeBinInt- 00:06:54.126 [2024-05-15 05:31:44.046385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a837e2c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.126 [2024-05-15 05:31:44.046411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.126 #68 NEW cov: 12053 ft: 15250 corp: 35/1099b lim: 40 exec/s: 68 rss: 73Mb L: 13/40 MS: 1 EraseBytes- 00:06:54.126 [2024-05-15 05:31:44.097191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:005b7e2c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.127 [2024-05-15 05:31:44.097216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.127 [2024-05-15 05:31:44.097359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.127 [2024-05-15 05:31:44.097376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.127 [2024-05-15 05:31:44.097516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:05000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.127 [2024-05-15 05:31:44.097533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.127 [2024-05-15 05:31:44.097666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.127 [2024-05-15 05:31:44.097681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.387 #69 NEW cov: 12053 ft: 15292 corp: 36/1137b lim: 40 exec/s: 34 rss: 73Mb L: 38/40 MS: 1 CrossOver- 00:06:54.387 #69 DONE cov: 12053 ft: 15292 corp: 36/1137b lim: 40 exec/s: 34 rss: 73Mb 00:06:54.387 ###### Recommended dictionary. ###### 00:06:54.387 "\014.,\002\000\000\000\000" # Uses: 0 00:06:54.387 "\000\000\000\000" # Uses: 0 00:06:54.387 ###### End of recommended dictionary. 
###### 00:06:54.387 Done 69 runs in 2 second(s) 00:06:54.387 [2024-05-15 05:31:44.126059] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.387 05:31:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:54.387 [2024-05-15 05:31:44.292932] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:54.387 [2024-05-15 05:31:44.293002] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271110 ] 00:06:54.387 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.646 [2024-05-15 05:31:44.470909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.646 [2024-05-15 05:31:44.541032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.646 [2024-05-15 05:31:44.601646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.646 [2024-05-15 05:31:44.617605] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:54.646 [2024-05-15 05:31:44.617963] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:54.646 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.646 INFO: Seed: 2840458973 00:06:54.646 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:54.646 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:54.646 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:54.646 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.646 #2 INITED exec/s: 0 rss: 63Mb 00:06:54.646 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:54.646 This may also happen if the target rejected all inputs we tried so far 00:06:54.904 [2024-05-15 05:31:44.667166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.904 [2024-05-15 05:31:44.667196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.905 [2024-05-15 05:31:44.667256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.905 [2024-05-15 05:31:44.667270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.905 [2024-05-15 05:31:44.667331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.905 [2024-05-15 05:31:44.667345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.164 NEW_FUNC[1/686]: 0x490900 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:55.164 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.164 #6 NEW cov: 11821 ft: 11822 corp: 2/25b lim: 40 exec/s: 0 rss: 70Mb L: 24/24 MS: 4 CMP-CrossOver-CMP-InsertRepeatedBytes- DE: "\000\""-"v\000\000\000"- 00:06:55.164 [2024-05-15 05:31:44.977767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:55.164 [2024-05-15 05:31:44.977801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.164 [2024-05-15 05:31:44.977862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:44.977876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.164 #7 NEW cov: 11951 ft: 12702 corp: 3/48b lim: 40 exec/s: 0 rss: 70Mb L: 23/24 MS: 1 EraseBytes- 00:06:55.164 [2024-05-15 05:31:45.027975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.028004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.164 [2024-05-15 05:31:45.028065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.028079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.164 [2024-05-15 05:31:45.028137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.028152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.164 #8 NEW cov: 11957 ft: 13038 corp: 4/72b lim: 40 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 ShuffleBytes- 00:06:55.164 [2024-05-15 05:31:45.067900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.067926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.164 [2024-05-15 05:31:45.067985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.067999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.164 #9 NEW cov: 12042 ft: 13275 corp: 5/95b lim: 40 exec/s: 0 rss: 70Mb L: 23/24 MS: 1 PersAutoDict- DE: "v\000\000\000"- 00:06:55.164 [2024-05-15 05:31:45.118074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7600ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.118101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.164 [2024-05-15 05:31:45.118160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.118175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.164 #10 NEW cov: 12042 ft: 13357 corp: 6/114b lim: 40 exec/s: 0 
rss: 70Mb L: 19/24 MS: 1 CrossOver- 00:06:55.164 [2024-05-15 05:31:45.158175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76800000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.158202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.164 [2024-05-15 05:31:45.158262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.164 [2024-05-15 05:31:45.158280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.424 #13 NEW cov: 12042 ft: 13429 corp: 7/131b lim: 40 exec/s: 0 rss: 70Mb L: 17/24 MS: 3 CrossOver-ChangeBinInt-CrossOver- 00:06:55.424 [2024-05-15 05:31:45.208466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.424 [2024-05-15 05:31:45.208492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.208552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.208565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.208623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.208637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.425 #14 NEW cov: 12042 ft: 13530 corp: 8/155b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 CrossOver- 00:06:55.425 [2024-05-15 05:31:45.258657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.258683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.258744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000a00ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.258758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.258818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.258832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.425 #15 NEW cov: 12042 ft: 13562 corp: 9/179b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 CMP- DE: "\012\000"- 00:06:55.425 [2024-05-15 05:31:45.308780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.308806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.308868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:000affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.308882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.308940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.308954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.425 #16 NEW cov: 12042 ft: 13574 corp: 10/203b lim: 40 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 ShuffleBytes- 00:06:55.425 [2024-05-15 05:31:45.358741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.358767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.358832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.358846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.425 #17 NEW cov: 12042 ft: 13624 corp: 11/224b lim: 40 exec/s: 0 rss: 71Mb L: 21/24 MS: 1 EraseBytes- 00:06:55.425 [2024-05-15 05:31:45.398859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.398885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.398943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.398957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.425 #18 NEW cov: 12042 ft: 13653 corp: 12/247b lim: 40 exec/s: 0 rss: 71Mb L: 23/24 MS: 1 ChangeBinInt- 00:06:55.425 [2024-05-15 05:31:45.439025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.439051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.425 [2024-05-15 05:31:45.439110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000a00ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.425 [2024-05-15 05:31:45.439124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.684 #19 NEW cov: 12042 ft: 13718 corp: 13/268b lim: 40 
exec/s: 0 rss: 71Mb L: 21/24 MS: 1 PersAutoDict- DE: "\012\000"- 00:06:55.684 [2024-05-15 05:31:45.489278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.489304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.684 [2024-05-15 05:31:45.489366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.489384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.684 [2024-05-15 05:31:45.489442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.489456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.684 #20 NEW cov: 12042 ft: 13745 corp: 14/299b lim: 40 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 CMP- DE: "\001\205\316\3044\177z\316"- 00:06:55.684 [2024-05-15 05:31:45.529365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7600ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.529393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.684 [2024-05-15 05:31:45.529469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffebebeb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.529483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.684 [2024-05-15 05:31:45.529543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ebebebeb cdw11:ebebebeb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.529560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.684 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:55.684 #21 NEW cov: 12065 ft: 13839 corp: 15/330b lim: 40 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:55.684 [2024-05-15 05:31:45.579364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7600ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.579394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.684 [2024-05-15 05:31:45.579455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffeb cdw11:ebebebeb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.579469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.684 #22 NEW cov: 12065 ft: 13885 corp: 16/352b lim: 40 exec/s: 0 rss: 72Mb L: 22/31 MS: 1 EraseBytes- 00:06:55.684 [2024-05-15 
05:31:45.629509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7600ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.629536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.684 [2024-05-15 05:31:45.629595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.684 [2024-05-15 05:31:45.629610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.684 #23 NEW cov: 12065 ft: 13916 corp: 17/371b lim: 40 exec/s: 23 rss: 72Mb L: 19/31 MS: 1 ChangeBit- 00:06:55.684 [2024-05-15 05:31:45.669744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7600ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.685 [2024-05-15 05:31:45.669770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.685 [2024-05-15 05:31:45.669831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.685 [2024-05-15 05:31:45.669844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.685 [2024-05-15 05:31:45.669900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.685 [2024-05-15 05:31:45.669913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.685 #24 NEW cov: 12065 ft: 13940 corp: 18/395b lim: 40 exec/s: 24 rss: 72Mb L: 24/31 MS: 1 CrossOver- 00:06:55.945 [2024-05-15 05:31:45.709596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.709622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.945 #25 NEW cov: 12065 ft: 14684 corp: 19/409b lim: 40 exec/s: 25 rss: 72Mb L: 14/31 MS: 1 CrossOver- 00:06:55.945 [2024-05-15 05:31:45.749653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.749679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.945 #26 NEW cov: 12065 ft: 14708 corp: 20/419b lim: 40 exec/s: 26 rss: 72Mb L: 10/31 MS: 1 EraseBytes- 00:06:55.945 [2024-05-15 05:31:45.790053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:02020202 cdw11:02020202 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.790079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.945 [2024-05-15 05:31:45.790141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:02020202 
cdw11:02020202 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.790155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.945 [2024-05-15 05:31:45.790217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:02020202 cdw11:02020202 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.790230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.945 #28 NEW cov: 12065 ft: 14716 corp: 21/446b lim: 40 exec/s: 28 rss: 72Mb L: 27/31 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:55.945 [2024-05-15 05:31:45.830085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.830111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.945 [2024-05-15 05:31:45.830171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.830185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.945 #29 NEW cov: 12065 ft: 14723 corp: 22/469b lim: 40 exec/s: 29 rss: 72Mb L: 23/31 MS: 1 ChangeBit- 00:06:55.945 [2024-05-15 05:31:45.870334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.870359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.945 [2024-05-15 05:31:45.870432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.870446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.945 [2024-05-15 05:31:45.870502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.870516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.945 #30 NEW cov: 12065 ft: 14728 corp: 23/493b lim: 40 exec/s: 30 rss: 72Mb L: 24/31 MS: 1 ShuffleBytes- 00:06:55.945 [2024-05-15 05:31:45.910456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a7a00ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.910482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.945 [2024-05-15 05:31:45.910545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.910560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:55.945 [2024-05-15 05:31:45.910615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.945 [2024-05-15 05:31:45.910629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.945 #31 NEW cov: 12065 ft: 14746 corp: 24/517b lim: 40 exec/s: 31 rss: 72Mb L: 24/31 MS: 1 ChangeBinInt- 00:06:55.946 [2024-05-15 05:31:45.960666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.946 [2024-05-15 05:31:45.960692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.946 [2024-05-15 05:31:45.960755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.946 [2024-05-15 05:31:45.960770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.946 [2024-05-15 05:31:45.960832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.946 [2024-05-15 05:31:45.960847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.206 #32 NEW cov: 12065 ft: 14794 corp: 25/548b lim: 40 exec/s: 32 rss: 72Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:56.206 [2024-05-15 05:31:46.010657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76ffffff cdw11:ffff00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.010682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.010742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.010756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.206 #33 NEW cov: 12065 ft: 14827 corp: 26/565b lim: 40 exec/s: 33 rss: 72Mb L: 17/31 MS: 1 CrossOver- 00:06:56.206 [2024-05-15 05:31:46.061099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.061125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.061186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffff3a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.061200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.061261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 
05:31:46.061275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.061336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0185cec4 cdw11:347f7ace SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.061361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.206 #34 NEW cov: 12065 ft: 15130 corp: 27/597b lim: 40 exec/s: 34 rss: 72Mb L: 32/32 MS: 1 InsertByte- 00:06:56.206 [2024-05-15 05:31:46.110922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.110948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.111007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.111022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.206 #35 NEW cov: 12065 ft: 15169 corp: 28/620b lim: 40 exec/s: 35 rss: 72Mb L: 23/32 MS: 1 ChangeBinInt- 00:06:56.206 [2024-05-15 05:31:46.150963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76800000 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.150988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.151048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.151062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.206 #36 NEW cov: 12065 ft: 15173 corp: 29/637b lim: 40 exec/s: 36 rss: 72Mb L: 17/32 MS: 1 ChangeBinInt- 00:06:56.206 [2024-05-15 05:31:46.191091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76800000 cdw11:0b000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.191116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.206 [2024-05-15 05:31:46.191176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff2fffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.206 [2024-05-15 05:31:46.191190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.206 #37 NEW cov: 12065 ft: 15204 corp: 30/654b lim: 40 exec/s: 37 rss: 72Mb L: 17/32 MS: 1 ChangeByte- 00:06:56.467 [2024-05-15 05:31:46.241474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:00220000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.241499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:06:56.467 [2024-05-15 05:31:46.241561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.241575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.241633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.241648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.467 #38 NEW cov: 12065 ft: 15208 corp: 31/678b lim: 40 exec/s: 38 rss: 73Mb L: 24/32 MS: 1 PersAutoDict- DE: "\000\""- 00:06:56.467 [2024-05-15 05:31:46.281405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76ffffff cdw11:ff1fff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.281430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.281491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.281505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.467 #39 NEW cov: 12065 ft: 15221 corp: 32/696b lim: 40 exec/s: 39 rss: 73Mb L: 18/32 MS: 1 InsertByte- 00:06:56.467 [2024-05-15 05:31:46.331874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.331900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.331967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.331981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.332040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.332054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.332111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.332124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.467 #40 NEW cov: 12065 ft: 15238 corp: 33/735b lim: 40 exec/s: 40 rss: 73Mb L: 39/39 MS: 1 CopyPart- 00:06:56.467 [2024-05-15 05:31:46.371633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:09090909 cdw11:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 
[2024-05-15 05:31:46.371659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.371718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:09090909 cdw11:09090909 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.371732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.467 #41 NEW cov: 12065 ft: 15251 corp: 34/752b lim: 40 exec/s: 41 rss: 73Mb L: 17/39 MS: 1 InsertRepeatedBytes- 00:06:56.467 [2024-05-15 05:31:46.411895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.411921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.467 [2024-05-15 05:31:46.411983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.467 [2024-05-15 05:31:46.411998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.468 [2024-05-15 05:31:46.412055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.468 [2024-05-15 05:31:46.412067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.468 #42 NEW cov: 12065 ft: 15254 corp: 35/783b lim: 40 exec/s: 42 rss: 73Mb L: 31/39 MS: 1 ShuffleBytes- 00:06:56.468 [2024-05-15 05:31:46.451821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:76000000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.468 [2024-05-15 05:31:46.451847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.468 [2024-05-15 05:31:46.451907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.468 [2024-05-15 05:31:46.451922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.468 #43 NEW cov: 12065 ft: 15266 corp: 36/802b lim: 40 exec/s: 43 rss: 73Mb L: 19/39 MS: 1 EraseBytes- 00:06:56.727 [2024-05-15 05:31:46.492307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.727 [2024-05-15 05:31:46.492334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.727 [2024-05-15 05:31:46.492403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.727 [2024-05-15 05:31:46.492417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.727 [2024-05-15 05:31:46.492475] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.727 [2024-05-15 05:31:46.492489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.727 [2024-05-15 05:31:46.492558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.492572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.728 #44 NEW cov: 12065 ft: 15278 corp: 37/841b lim: 40 exec/s: 44 rss: 73Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:56.728 [2024-05-15 05:31:46.542116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.542142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.728 [2024-05-15 05:31:46.542202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:290a00ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.542216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.728 #45 NEW cov: 12065 ft: 15293 corp: 38/862b lim: 40 exec/s: 45 rss: 73Mb L: 21/39 MS: 1 ChangeByte- 00:06:56.728 [2024-05-15 05:31:46.592553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.592579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.728 [2024-05-15 05:31:46.592636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.592650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.728 [2024-05-15 05:31:46.592707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.592721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.728 [2024-05-15 05:31:46.592779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.592792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.728 #46 NEW cov: 12065 ft: 15341 corp: 39/901b lim: 40 exec/s: 46 rss: 73Mb L: 39/39 MS: 1 ShuffleBytes- 00:06:56.728 [2024-05-15 05:31:46.642549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:7600ffff cdw11:ff760000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.642575] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.728 [2024-05-15 05:31:46.642638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.642655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.728 [2024-05-15 05:31:46.642717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ff0022 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.728 [2024-05-15 05:31:46.642731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.728 #47 NEW cov: 12065 ft: 15380 corp: 40/927b lim: 40 exec/s: 23 rss: 73Mb L: 26/39 MS: 1 PersAutoDict- DE: "\000\""- 00:06:56.728 #47 DONE cov: 12065 ft: 15380 corp: 40/927b lim: 40 exec/s: 23 rss: 73Mb 00:06:56.728 ###### Recommended dictionary. ###### 00:06:56.728 "\000\"" # Uses: 2 00:06:56.728 "v\000\000\000" # Uses: 1 00:06:56.728 "\012\000" # Uses: 1 00:06:56.728 "\001\205\316\3044\177z\316" # Uses: 0 00:06:56.728 ###### End of recommended dictionary. ###### 00:06:56.728 Done 47 runs in 2 second(s) 00:06:56.728 [2024-05-15 05:31:46.671862] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.987 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- 
# echo leak:nvmf_ctrlr_create 00:06:56.988 05:31:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:56.988 [2024-05-15 05:31:46.841015] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:56.988 [2024-05-15 05:31:46.841091] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271578 ] 00:06:56.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.247 [2024-05-15 05:31:47.020755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.247 [2024-05-15 05:31:47.092127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.247 [2024-05-15 05:31:47.151564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.247 [2024-05-15 05:31:47.167495] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:57.247 [2024-05-15 05:31:47.167884] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:57.247 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.247 INFO: Seed: 1097475519 00:06:57.247 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:57.247 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:57.247 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:57.247 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.247 #2 INITED exec/s: 0 rss: 63Mb 00:06:57.247 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:57.247 This may also happen if the target rejected all inputs we tried so far 00:06:57.247 [2024-05-15 05:31:47.223434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.247 [2024-05-15 05:31:47.223463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.247 [2024-05-15 05:31:47.223524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.247 [2024-05-15 05:31:47.223537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.247 [2024-05-15 05:31:47.223592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.247 [2024-05-15 05:31:47.223607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.506 NEW_FUNC[1/686]: 0x492670 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:57.506 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.507 #3 NEW cov: 11817 ft: 11809 corp: 2/30b lim: 40 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:57.766 [2024-05-15 05:31:47.534289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.766 [2024-05-15 05:31:47.534330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.766 [2024-05-15 05:31:47.534410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.766 [2024-05-15 05:31:47.534439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.766 [2024-05-15 05:31:47.534512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.534531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.767 #14 NEW cov: 11949 ft: 12400 corp: 3/59b lim: 40 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 CopyPart- 00:06:57.767 [2024-05-15 05:31:47.584303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.584329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.584386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.584400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.584460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.584473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.767 #20 NEW cov: 11955 ft: 12703 corp: 4/89b lim: 40 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CrossOver- 00:06:57.767 [2024-05-15 05:31:47.634388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.634414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.634480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.634494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.634547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.634560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.767 #23 NEW cov: 12040 ft: 12932 corp: 5/116b lim: 40 exec/s: 0 rss: 70Mb L: 27/30 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:57.767 [2024-05-15 05:31:47.674571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.674596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.674654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.674668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.674723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:da000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.674737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.767 #25 NEW cov: 12040 ft: 13068 corp: 6/141b lim: 40 exec/s: 0 rss: 70Mb L: 25/30 MS: 2 InsertRepeatedBytes-CrossOver- 00:06:57.767 [2024-05-15 05:31:47.714655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.714681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.714739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 
nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.714753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.714807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.714821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.767 #31 NEW cov: 12040 ft: 13124 corp: 7/170b lim: 40 exec/s: 0 rss: 70Mb L: 29/30 MS: 1 ChangeBit- 00:06:57.767 [2024-05-15 05:31:47.754936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.754965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.755022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.755036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.755089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.755103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.767 [2024-05-15 05:31:47.755159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.767 [2024-05-15 05:31:47.755173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.767 #32 NEW cov: 12040 ft: 13474 corp: 8/205b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:58.027 [2024-05-15 05:31:47.805079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.805105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.805163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.805176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.805234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.805249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.805304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.805317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.027 #33 NEW cov: 12040 ft: 13541 corp: 9/240b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:06:58.027 [2024-05-15 05:31:47.855059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.855084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.855141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.855155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.855209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadabada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.855222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.027 #34 NEW cov: 12040 ft: 13606 corp: 10/271b lim: 40 exec/s: 0 rss: 71Mb L: 31/35 MS: 1 InsertByte- 00:06:58.027 [2024-05-15 05:31:47.895319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ddadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.895347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.895408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.895422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.895475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.895488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.027 [2024-05-15 05:31:47.895539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.027 [2024-05-15 05:31:47.895551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.028 #35 NEW cov: 12040 ft: 13677 corp: 11/306b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:06:58.028 [2024-05-15 05:31:47.945514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ddadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.945540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:47.945597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.945611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:47.945668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffefdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.945683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:47.945736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.945750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.028 #36 NEW cov: 12040 ft: 13719 corp: 12/341b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:06:58.028 [2024-05-15 05:31:47.995467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.995493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:47.995552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:da282525 cdw11:25dadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.995566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:47.995620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:da000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:47.995633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.028 #37 NEW cov: 12040 ft: 13750 corp: 13/366b lim: 40 exec/s: 0 rss: 71Mb L: 25/35 MS: 1 ChangeBinInt- 00:06:58.028 [2024-05-15 05:31:48.045561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0000da cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:48.045592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:48.045652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:48.045666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.028 [2024-05-15 05:31:48.045721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dada0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.028 [2024-05-15 05:31:48.045735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.290 #38 NEW cov: 12040 ft: 13816 corp: 14/397b lim: 40 exec/s: 0 rss: 71Mb L: 31/35 MS: 1 CrossOver- 00:06:58.290 [2024-05-15 05:31:48.095907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ddadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.290 [2024-05-15 05:31:48.095932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.290 [2024-05-15 05:31:48.095989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0a1e cdw11:250000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.290 [2024-05-15 05:31:48.096003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.290 [2024-05-15 05:31:48.096061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.290 [2024-05-15 05:31:48.096075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.290 [2024-05-15 05:31:48.096129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.290 [2024-05-15 05:31:48.096143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.290 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:58.290 #39 NEW cov: 12063 ft: 13862 corp: 15/432b lim: 40 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:58.291 [2024-05-15 05:31:48.135859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.135885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.135940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.135954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.136011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadad0 cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.136024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.291 #40 NEW cov: 12063 ft: 13875 corp: 16/461b lim: 40 exec/s: 0 rss: 71Mb L: 29/35 MS: 1 ChangeBinInt- 00:06:58.291 [2024-05-15 05:31:48.175914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.175940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.176001] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada1f00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.176015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.176071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000bada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.176085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.291 #41 NEW cov: 12063 ft: 13896 corp: 17/492b lim: 40 exec/s: 0 rss: 71Mb L: 31/35 MS: 1 ChangeBinInt- 00:06:58.291 [2024-05-15 05:31:48.216246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.216272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.216332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.216347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.216405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.216420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.216475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.216488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.291 #42 NEW cov: 12063 ft: 13933 corp: 18/527b lim: 40 exec/s: 42 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:06:58.291 [2024-05-15 05:31:48.256297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.256323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.256385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.256399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.256455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffdadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.256469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 
05:31:48.256525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.256538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.291 #43 NEW cov: 12063 ft: 13990 corp: 19/565b lim: 40 exec/s: 43 rss: 71Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:58.291 [2024-05-15 05:31:48.306318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.306344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.306439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.306455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.291 [2024-05-15 05:31:48.306509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.291 [2024-05-15 05:31:48.306523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.550 #44 NEW cov: 12063 ft: 13999 corp: 20/594b lim: 40 exec/s: 44 rss: 72Mb L: 29/38 MS: 1 CopyPart- 00:06:58.550 [2024-05-15 05:31:48.346465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.550 [2024-05-15 05:31:48.346490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.550 [2024-05-15 05:31:48.346548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada2525 cdw11:da2825da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.550 [2024-05-15 05:31:48.346561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.550 [2024-05-15 05:31:48.346617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:da000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.346630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.551 #45 NEW cov: 12063 ft: 14017 corp: 21/619b lim: 40 exec/s: 45 rss: 72Mb L: 25/38 MS: 1 ShuffleBytes- 00:06:58.551 [2024-05-15 05:31:48.396716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ddadada cdw11:dadadaff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.396741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.396801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffdada cdw11:da0adada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.396814] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.396868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:efdadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.396882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.396938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadaff cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.396950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.551 #46 NEW cov: 12063 ft: 14025 corp: 22/657b lim: 40 exec/s: 46 rss: 72Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:06:58.551 [2024-05-15 05:31:48.446913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ddadada cdw11:dadadaff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.446940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.446998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffdada cdw11:da0adada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.447012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.447071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:efdadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.447084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.447140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadaff cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.447153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.551 #47 NEW cov: 12063 ft: 14038 corp: 23/695b lim: 40 exec/s: 47 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:06:58.551 [2024-05-15 05:31:48.496851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.496876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.496933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:da282525 cdw11:25dadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.496946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.497003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:da25dada cdw11:dada0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 
[2024-05-15 05:31:48.497017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.551 #48 NEW cov: 12063 ft: 14044 corp: 24/721b lim: 40 exec/s: 48 rss: 72Mb L: 26/38 MS: 1 CopyPart- 00:06:58.551 [2024-05-15 05:31:48.537105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.537130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.537186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadedada cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.537199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.537254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffdadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.537268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.551 [2024-05-15 05:31:48.537323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.551 [2024-05-15 05:31:48.537336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.551 #49 NEW cov: 12063 ft: 14062 corp: 25/759b lim: 40 exec/s: 49 rss: 72Mb L: 38/38 MS: 1 ChangeBit- 00:06:58.809 [2024-05-15 05:31:48.587111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.809 [2024-05-15 05:31:48.587137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.809 [2024-05-15 05:31:48.587198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.809 [2024-05-15 05:31:48.587212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.809 [2024-05-15 05:31:48.587273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.809 [2024-05-15 05:31:48.587287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.809 #50 NEW cov: 12063 ft: 14091 corp: 26/788b lim: 40 exec/s: 50 rss: 72Mb L: 29/38 MS: 1 ChangeBit- 00:06:58.809 [2024-05-15 05:31:48.637451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.637476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.637536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:fcffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.637551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.637607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffdada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.637621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.637677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.637691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:58.810 #51 NEW cov: 12063 ft: 14106 corp: 27/827b lim: 40 exec/s: 51 rss: 72Mb L: 39/39 MS: 1 InsertByte- 00:06:58.810 [2024-05-15 05:31:48.677365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadad9 cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.677394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.677451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada1f00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.677465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.677521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0000bada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.677535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.810 #52 NEW cov: 12063 ft: 14165 corp: 28/858b lim: 40 exec/s: 52 rss: 72Mb L: 31/39 MS: 1 ChangeBinInt- 00:06:58.810 [2024-05-15 05:31:48.727557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.727582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.727641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.727655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.727712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.727729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.810 #53 NEW cov: 12063 ft: 14172 corp: 29/887b lim: 40 exec/s: 
53 rss: 72Mb L: 29/39 MS: 1 ChangeBit- 00:06:58.810 [2024-05-15 05:31:48.777688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.777713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.777774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.777787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.777846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadae4da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.777860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.810 #54 NEW cov: 12063 ft: 14178 corp: 30/916b lim: 40 exec/s: 54 rss: 72Mb L: 29/39 MS: 1 ChangeBinInt- 00:06:58.810 [2024-05-15 05:31:48.817777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadad9 cdw11:dada0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.817802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.817863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000000ba cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.817877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.810 [2024-05-15 05:31:48.817932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:58.810 [2024-05-15 05:31:48.817945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.069 #55 NEW cov: 12063 ft: 14183 corp: 31/940b lim: 40 exec/s: 55 rss: 72Mb L: 24/39 MS: 1 EraseBytes- 00:06:59.069 [2024-05-15 05:31:48.868143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.868169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.069 [2024-05-15 05:31:48.868223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d7da0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.868237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.069 [2024-05-15 05:31:48.868292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.868307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.069 [2024-05-15 05:31:48.868364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.868383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.069 #56 NEW cov: 12063 ft: 14221 corp: 32/975b lim: 40 exec/s: 56 rss: 72Mb L: 35/39 MS: 1 ChangeByte- 00:06:59.069 [2024-05-15 05:31:48.908176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.908204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.069 [2024-05-15 05:31:48.908263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadada0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.908277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.069 [2024-05-15 05:31:48.908332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.069 [2024-05-15 05:31:48.908347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.069 [2024-05-15 05:31:48.908404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.908418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.070 #57 NEW cov: 12063 ft: 14231 corp: 33/1010b lim: 40 exec/s: 57 rss: 72Mb L: 35/39 MS: 1 CrossOver- 00:06:59.070 [2024-05-15 05:31:48.948328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.948352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.948422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.948437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.948492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.948505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.948559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffdadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.948571] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.998463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.998488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.998545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.998559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.998614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffdadada cdw11:da230000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.998627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:48.998682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00dadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:48.998696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.070 #59 NEW cov: 12063 ft: 14238 corp: 34/1045b lim: 40 exec/s: 59 rss: 72Mb L: 35/39 MS: 2 CopyPart-ChangeBinInt- 00:06:59.070 [2024-05-15 05:31:49.038581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2ddadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.038606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:49.038665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.038679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:49.038733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffdada cdw11:dadadaff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.038746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:49.038802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.038815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.070 #60 NEW cov: 12063 ft: 14242 corp: 35/1080b lim: 40 exec/s: 60 rss: 72Mb L: 35/39 MS: 1 ShuffleBytes- 00:06:59.070 [2024-05-15 05:31:49.078494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadad9 cdw11:dadada0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.078519] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:49.078582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.078595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.070 [2024-05-15 05:31:49.078650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.070 [2024-05-15 05:31:49.078664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.329 #61 NEW cov: 12063 ft: 14244 corp: 36/1104b lim: 40 exec/s: 61 rss: 72Mb L: 24/39 MS: 1 CrossOver- 00:06:59.329 [2024-05-15 05:31:49.118778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.118804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.118861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dada0ada cdw11:daffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.118875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.118930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.118945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.119000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadaffda cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.119014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.329 #62 NEW cov: 12063 ft: 14280 corp: 37/1141b lim: 40 exec/s: 62 rss: 72Mb L: 37/39 MS: 1 CopyPart- 00:06:59.329 [2024-05-15 05:31:49.158630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.158656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.158716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.158730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.329 #63 NEW cov: 12063 ft: 14522 corp: 38/1162b lim: 40 exec/s: 63 rss: 72Mb L: 21/39 MS: 1 EraseBytes- 00:06:59.329 [2024-05-15 05:31:49.199060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 
nsid:0 cdw10:0adadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.199086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.199144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:da000000 cdw11:00dadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.199158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.199210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dadada0a cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.199224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.329 [2024-05-15 05:31:49.199280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:59.329 [2024-05-15 05:31:49.199293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.329 #64 pulse cov: 12063 ft: 14532 corp: 38/1162b lim: 40 exec/s: 32 rss: 73Mb 00:06:59.329 #64 NEW cov: 12063 ft: 14532 corp: 39/1201b lim: 40 exec/s: 32 rss: 73Mb L: 39/39 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:59.329 #64 DONE cov: 12063 ft: 14532 corp: 39/1201b lim: 40 exec/s: 32 rss: 73Mb 00:06:59.329 ###### Recommended dictionary. ###### 00:06:59.329 "\000\000\000\000" # Uses: 0 00:06:59.329 ###### End of recommended dictionary. 
###### 00:06:59.329 Done 64 runs in 2 second(s) 00:06:59.329 [2024-05-15 05:31:49.228766] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:59.329 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.589 05:31:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:59.589 [2024-05-15 05:31:49.396503] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:59.589 [2024-05-15 05:31:49.396573] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271933 ] 00:06:59.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.589 [2024-05-15 05:31:49.578186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.848 [2024-05-15 05:31:49.645787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.848 [2024-05-15 05:31:49.704987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.848 [2024-05-15 05:31:49.720939] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:59.848 [2024-05-15 05:31:49.721345] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:59.848 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.848 INFO: Seed: 3650498911 00:06:59.848 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:06:59.848 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:06:59.848 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:59.848 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.848 #2 INITED exec/s: 0 rss: 64Mb 00:06:59.848 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.848 This may also happen if the target rejected all inputs we tried so far 00:06:59.848 [2024-05-15 05:31:49.798373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.848 [2024-05-15 05:31:49.798414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.848 [2024-05-15 05:31:49.798557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.848 [2024-05-15 05:31:49.798577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.848 [2024-05-15 05:31:49.798721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.848 [2024-05-15 05:31:49.798738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.848 [2024-05-15 05:31:49.798882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:59.848 [2024-05-15 05:31:49.798901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.107 NEW_FUNC[1/683]: 0x494230 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:00.107 NEW_FUNC[2/683]: 0x4be420 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.107 #18 NEW cov: 11781 ft: 11792 corp: 2/40b lim: 40 exec/s: 0 rss: 70Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:00.367 [2024-05-15 05:31:50.138744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.138787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.367 [2024-05-15 05:31:50.138915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.138933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.367 [2024-05-15 05:31:50.139066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.139084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.367 NEW_FUNC[1/2]: 0x1a513d0 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:712 00:07:00.367 NEW_FUNC[2/2]: 0x1d7fba0 in thread_execute_poller /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:938 00:07:00.367 #20 NEW cov: 11937 ft: 12754 corp: 3/66b lim: 40 exec/s: 0 rss: 70Mb L: 26/39 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:00.367 [2024-05-15 05:31:50.199267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.199294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.367 [2024-05-15 05:31:50.199422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.199440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.367 [2024-05-15 05:31:50.199561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.199577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.367 [2024-05-15 05:31:50.199711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.199727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.367 [2024-05-15 05:31:50.199862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.199879] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.367 #21 NEW cov: 11943 ft: 13109 corp: 4/106b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:00.367 [2024-05-15 05:31:50.249417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.367 [2024-05-15 05:31:50.249444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.249574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.249589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.249717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffc4ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.249732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.249860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.249876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.250008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.250023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.368 #22 NEW cov: 12028 ft: 13497 corp: 5/146b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeByte- 00:07:00.368 [2024-05-15 05:31:50.299365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.299395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.299520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.299539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.299676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.299698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.299825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:00.368 [2024-05-15 05:31:50.299840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.368 #26 NEW cov: 12028 ft: 13620 corp: 6/185b lim: 40 exec/s: 0 rss: 71Mb L: 39/40 MS: 4 CopyPart-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:00.368 [2024-05-15 05:31:50.349455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.349483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.349633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.349650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.349776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.349794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.368 [2024-05-15 05:31:50.349934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.368 [2024-05-15 05:31:50.349951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.368 #27 NEW cov: 12028 ft: 13698 corp: 7/224b lim: 40 exec/s: 0 rss: 71Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:00.628 [2024-05-15 05:31:50.409625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.409653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.409788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00006565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.409806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.409942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.409959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.410088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:65000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.410106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.628 #28 NEW cov: 12028 ft: 13764 corp: 8/263b lim: 40 exec/s: 0 rss: 71Mb L: 39/40 MS: 1 CrossOver- 
00:07:00.628 [2024-05-15 05:31:50.459856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.459884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.460030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.460047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.460176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.460193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.460318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.460336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.628 #29 NEW cov: 12028 ft: 13807 corp: 9/302b lim: 40 exec/s: 0 rss: 71Mb L: 39/40 MS: 1 CopyPart- 00:07:00.628 [2024-05-15 05:31:50.510112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.510140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.510267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.510286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.510424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.510449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.510580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.510598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.510724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:002a0a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.510741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.628 #30 NEW cov: 12028 ft: 13913 corp: 
10/342b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:07:00.628 [2024-05-15 05:31:50.550267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.550294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.550425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.550442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.550575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.550591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.550716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.550733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.550859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.550873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.628 #31 NEW cov: 12028 ft: 13959 corp: 11/382b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:07:00.628 [2024-05-15 05:31:50.589455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.589482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.628 #32 NEW cov: 12028 ft: 14325 corp: 12/391b lim: 40 exec/s: 0 rss: 71Mb L: 9/40 MS: 1 CrossOver- 00:07:00.628 [2024-05-15 05:31:50.630429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0afff7ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.630465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.630599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.630619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.630745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.630764] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.630888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.630904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.628 [2024-05-15 05:31:50.631039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.628 [2024-05-15 05:31:50.631054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.888 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:00.888 #33 NEW cov: 12051 ft: 14336 corp: 13/431b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBit- 00:07:00.888 [2024-05-15 05:31:50.679769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a0a0000 cdw11:009f0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.679796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.888 #44 NEW cov: 12051 ft: 14358 corp: 14/441b lim: 40 exec/s: 0 rss: 71Mb L: 10/40 MS: 1 InsertByte- 00:07:00.888 [2024-05-15 05:31:50.740314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.740340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.740469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.740487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.740608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.740627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.888 #45 NEW cov: 12051 ft: 14417 corp: 15/467b lim: 40 exec/s: 45 rss: 72Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:00.888 [2024-05-15 05:31:50.790957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.790984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.791110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff09 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.791126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.791256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.791273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.791431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.791447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.791582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.791599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.888 #46 NEW cov: 12051 ft: 14507 corp: 16/507b lim: 40 exec/s: 46 rss: 72Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:00.888 [2024-05-15 05:31:50.830420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a0a0000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.830447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.830590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffc4ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.830607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.830738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.830756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.830887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:9f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.830904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.888 #47 NEW cov: 12051 ft: 14535 corp: 17/540b lim: 40 exec/s: 47 rss: 72Mb L: 33/40 MS: 1 CrossOver- 00:07:00.888 [2024-05-15 05:31:50.891167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.891195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.891343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.891361] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.891490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.891509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.891633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.891651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.888 [2024-05-15 05:31:50.891772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.888 [2024-05-15 05:31:50.891789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.148 #48 NEW cov: 12051 ft: 14540 corp: 18/580b lim: 40 exec/s: 48 rss: 72Mb L: 40/40 MS: 1 CrossOver- 00:07:01.148 [2024-05-15 05:31:50.930487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.930513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:50.930662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:6565653d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.930681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:50.930813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.930832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.148 #49 NEW cov: 12051 ft: 14569 corp: 19/607b lim: 40 exec/s: 49 rss: 72Mb L: 27/40 MS: 1 InsertByte- 00:07:01.148 [2024-05-15 05:31:50.971423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.971450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:50.971580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.971597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:50.971728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.971744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:50.971873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.971890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:50.972023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:50.972039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.148 #50 NEW cov: 12051 ft: 14615 corp: 20/647b lim: 40 exec/s: 50 rss: 72Mb L: 40/40 MS: 1 CMP- DE: "\000\000\000\012"- 00:07:01.148 [2024-05-15 05:31:51.021367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.021399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.021525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:00000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.021542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.021668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.021684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.021811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.021828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.021956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.021973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.148 #51 NEW cov: 12051 ft: 14634 corp: 21/687b lim: 40 exec/s: 51 rss: 72Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\000\000\012"- 00:07:01.148 [2024-05-15 05:31:51.071680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.071711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.071835] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.071852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.071981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffc4ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.071998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.072120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.072137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.072259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.072275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.148 #52 NEW cov: 12051 ft: 14709 corp: 22/727b lim: 40 exec/s: 52 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:01.148 [2024-05-15 05:31:51.121289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.121318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.121452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000657a cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.121471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.121602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.121620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.148 [2024-05-15 05:31:51.121747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:65000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.148 [2024-05-15 05:31:51.121765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.148 #53 NEW cov: 12051 ft: 14735 corp: 23/766b lim: 40 exec/s: 53 rss: 72Mb L: 39/40 MS: 1 ChangeByte- 00:07:01.416 [2024-05-15 05:31:51.181370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.416 [2024-05-15 05:31:51.181407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:07:01.416 [2024-05-15 05:31:51.181537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.416 [2024-05-15 05:31:51.181555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.416 #54 NEW cov: 12051 ft: 14932 corp: 24/789b lim: 40 exec/s: 54 rss: 72Mb L: 23/40 MS: 1 EraseBytes- 00:07:01.417 [2024-05-15 05:31:51.231035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.231064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.231185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.231202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.417 #55 NEW cov: 12051 ft: 14947 corp: 25/810b lim: 40 exec/s: 55 rss: 72Mb L: 21/40 MS: 1 EraseBytes- 00:07:01.417 [2024-05-15 05:31:51.281357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.281391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.281532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656529 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.281550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.281671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3d656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.281688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.417 #56 NEW cov: 12051 ft: 14950 corp: 26/838b lim: 40 exec/s: 56 rss: 72Mb L: 28/40 MS: 1 InsertByte- 00:07:01.417 [2024-05-15 05:31:51.331986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.332014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.332149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656529 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.332166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.332299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:3d65655e cdw11:65656565 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.332316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.417 #57 NEW cov: 12051 ft: 14954 corp: 27/866b lim: 40 exec/s: 57 rss: 73Mb L: 28/40 MS: 1 ChangeBinInt- 00:07:01.417 [2024-05-15 05:31:51.382498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.382526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.382662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:00000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.382680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.382809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.382827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.382973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.382991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.383126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.383143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.417 #58 NEW cov: 12051 ft: 14970 corp: 28/906b lim: 40 exec/s: 58 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:01.417 [2024-05-15 05:31:51.422200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.422228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.422361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.422378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.417 [2024-05-15 05:31:51.422511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.417 [2024-05-15 05:31:51.422528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.681 #59 NEW cov: 12051 ft: 15014 corp: 29/932b lim: 40 exec/s: 59 rss: 73Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:01.681 
[2024-05-15 05:31:51.471945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.471974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 #60 NEW cov: 12051 ft: 15023 corp: 30/945b lim: 40 exec/s: 60 rss: 73Mb L: 13/40 MS: 1 CopyPart- 00:07:01.681 [2024-05-15 05:31:51.522393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.522423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.522555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.522573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.681 #61 NEW cov: 12051 ft: 15031 corp: 31/965b lim: 40 exec/s: 61 rss: 73Mb L: 20/40 MS: 1 EraseBytes- 00:07:01.681 [2024-05-15 05:31:51.562513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.562541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.562673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00006565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.562692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.562830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:65656565 cdw11:65656565 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.562849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.562983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:65000000 cdw11:00000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.563004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.681 #62 NEW cov: 12051 ft: 15037 corp: 32/1004b lim: 40 exec/s: 62 rss: 73Mb L: 39/40 MS: 1 ChangeBit- 00:07:01.681 [2024-05-15 05:31:51.613265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.613293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.613417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:01.681 [2024-05-15 05:31:51.613445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.613566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.613583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.613712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.613728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.613860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:002a000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.613877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.681 #63 NEW cov: 12051 ft: 15069 corp: 33/1044b lim: 40 exec/s: 63 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:07:01.681 [2024-05-15 05:31:51.653364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.653393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.653518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.653537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.653655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.653672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.653800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.653816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.681 [2024-05-15 05:31:51.653945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.681 [2024-05-15 05:31:51.653963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.681 #64 NEW cov: 12051 ft: 15113 corp: 34/1084b lim: 40 exec/s: 64 rss: 73Mb L: 40/40 MS: 1 CrossOver- 00:07:01.940 [2024-05-15 05:31:51.703667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.703695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.703844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.703863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.703992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.704009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.704144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:b3ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.704162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.704293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.704310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.940 #65 NEW cov: 12051 ft: 15122 corp: 35/1124b lim: 40 exec/s: 65 rss: 73Mb L: 40/40 MS: 1 CMP- DE: "\377\377\000\263"- 00:07:01.940 [2024-05-15 05:31:51.743604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.743631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.743775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffff00 cdw11:00000a2c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.743791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.743930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.743963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.744095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.744113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.940 [2024-05-15 05:31:51.744244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.940 [2024-05-15 05:31:51.744261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.940 #66 NEW cov: 12051 ft: 15132 corp: 36/1164b lim: 40 exec/s: 33 rss: 73Mb L: 40/40 MS: 1 ChangeByte- 00:07:01.940 #66 DONE cov: 12051 ft: 15132 corp: 36/1164b lim: 40 exec/s: 33 rss: 73Mb 00:07:01.940 ###### Recommended dictionary. ###### 00:07:01.940 "\000\000\000\012" # Uses: 1 00:07:01.940 "\377\377\000\263" # Uses: 0 00:07:01.940 ###### End of recommended dictionary. ###### 00:07:01.940 Done 66 runs in 2 second(s) 00:07:01.940 [2024-05-15 05:31:51.772791] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.941 05:31:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:01.941 [2024-05-15 05:31:51.939708] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:01.941 [2024-05-15 05:31:51.939780] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272468 ] 00:07:02.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.200 [2024-05-15 05:31:52.124834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.200 [2024-05-15 05:31:52.189900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.459 [2024-05-15 05:31:52.249526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.459 [2024-05-15 05:31:52.265474] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:02.459 [2024-05-15 05:31:52.265837] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:02.459 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.459 INFO: Seed: 1900518979 00:07:02.459 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:02.459 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:02.459 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:02.459 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.459 #2 INITED exec/s: 0 rss: 64Mb 00:07:02.459 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.459 This may also happen if the target rejected all inputs we tried so far 00:07:02.459 [2024-05-15 05:31:52.321530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.459 [2024-05-15 05:31:52.321558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.459 [2024-05-15 05:31:52.321621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.459 [2024-05-15 05:31:52.321636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.459 [2024-05-15 05:31:52.321698] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.459 [2024-05-15 05:31:52.321712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.719 NEW_FUNC[1/685]: 0x495df0 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:02.719 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.719 #13 NEW cov: 11793 ft: 11794 corp: 2/28b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:07:02.719 [2024-05-15 05:31:52.653985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.654030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:02.719 [2024-05-15 05:31:52.654174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.654197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.719 [2024-05-15 05:31:52.654331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.654354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.719 [2024-05-15 05:31:52.654492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.654514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.719 [2024-05-15 05:31:52.654655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.654678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.719 NEW_FUNC[1/1]: 0x1a513d0 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:712 00:07:02.719 #14 NEW cov: 11931 ft: 12897 corp: 3/63b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:02.719 [2024-05-15 05:31:52.712956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.712986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.719 [2024-05-15 05:31:52.713114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.713132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.719 [2024-05-15 05:31:52.713271] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.719 [2024-05-15 05:31:52.713289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.719 #20 NEW cov: 11937 ft: 13080 corp: 4/90b lim: 35 exec/s: 0 rss: 70Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:02.979 [2024-05-15 05:31:52.764079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.764109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.764236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.764255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.764384] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.764403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.764539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.764556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.764695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.764713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.979 #21 NEW cov: 12022 ft: 13356 corp: 5/125b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:02.979 [2024-05-15 05:31:52.813715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.813745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.813880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.813898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.814034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.814050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.979 #22 NEW cov: 12022 ft: 13418 corp: 6/151b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 EraseBytes- 00:07:02.979 [2024-05-15 05:31:52.874376] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.874409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.874545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.874562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.874696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.874715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.874848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.874864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.874997] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.875015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.979 #23 NEW cov: 12022 ft: 13485 corp: 7/186b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:02.979 [2024-05-15 05:31:52.923407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.923435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.979 #24 NEW cov: 12022 ft: 14267 corp: 8/197b lim: 35 exec/s: 0 rss: 70Mb L: 11/35 MS: 1 CrossOver- 00:07:02.979 [2024-05-15 05:31:52.964249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.964277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.964407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.964425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.964550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.964566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.964692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.964708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.979 [2024-05-15 05:31:52.964829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.979 [2024-05-15 05:31:52.964845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.979 #25 NEW cov: 12022 ft: 14320 corp: 9/232b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:03.239 [2024-05-15 05:31:53.014761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.014790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.014933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.014952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.015083] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.015100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.015223] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.015241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.015372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.015392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.239 #26 NEW cov: 12022 ft: 14363 corp: 10/267b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:03.239 [2024-05-15 05:31:53.054526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.054553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.054687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.054703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.054828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.054846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.054980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.054998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.055122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.055139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.239 #27 NEW cov: 12022 ft: 14434 corp: 11/302b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:07:03.239 [2024-05-15 05:31:53.114939] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.114967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.115089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.115108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.115237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.115255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.115384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.115405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.115535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.115553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.239 #28 NEW cov: 12022 ft: 14451 corp: 12/337b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:03.239 [2024-05-15 05:31:53.155119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.155146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.155282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.155301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.155430] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.155447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.155584] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.155601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.155735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.155752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.239 #29 NEW cov: 12022 ft: 14551 corp: 13/372b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 CMP- DE: "\377\377\377\377\001,.\015"- 00:07:03.239 [2024-05-15 05:31:53.194696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.194723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.239 [2024-05-15 05:31:53.194864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.239 [2024-05-15 05:31:53.194883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.240 [2024-05-15 05:31:53.195013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.240 [2024-05-15 05:31:53.195029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.240 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:03.240 #30 NEW cov: 12045 ft: 14629 corp: 14/399b lim: 35 exec/s: 0 rss: 71Mb L: 27/35 MS: 1 ShuffleBytes- 00:07:03.240 [2024-05-15 05:31:53.245099] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.240 [2024-05-15 05:31:53.245127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.240 [2024-05-15 05:31:53.245253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.240 [2024-05-15 05:31:53.245277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.240 [2024-05-15 05:31:53.245407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.240 [2024-05-15 05:31:53.245423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.240 [2024-05-15 05:31:53.245552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.240 [2024-05-15 05:31:53.245568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.499 #31 NEW cov: 12052 ft: 14670 corp: 15/429b lim: 35 exec/s: 0 rss: 71Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:07:03.499 [2024-05-15 05:31:53.285520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.285546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.285683] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.285699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.285830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.285846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.285969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 
05:31:53.285985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.286111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.286128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.499 #32 NEW cov: 12052 ft: 14688 corp: 16/464b lim: 35 exec/s: 32 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:07:03.499 [2024-05-15 05:31:53.335253] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.335280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.335412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.335441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.335571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.335586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.335717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.335734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.335855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.335872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.499 #33 NEW cov: 12052 ft: 14782 corp: 17/499b lim: 35 exec/s: 33 rss: 71Mb L: 35/35 MS: 1 CopyPart- 00:07:03.499 [2024-05-15 05:31:53.385806] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.385832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.385960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.385978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.386106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.386121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.386243] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.499 [2024-05-15 05:31:53.386260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.499 [2024-05-15 05:31:53.386385] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.386402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.500 #34 NEW cov: 12052 ft: 14847 corp: 18/534b lim: 35 exec/s: 34 rss: 71Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:03.500 [2024-05-15 05:31:53.435937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.435963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.436088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000002c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.436107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.436230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.436246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.436366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.436383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.436511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.436528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.500 #35 NEW cov: 12052 ft: 14922 corp: 19/569b lim: 35 exec/s: 35 rss: 71Mb L: 35/35 MS: 1 ChangeByte- 00:07:03.500 [2024-05-15 05:31:53.485831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.485859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.486005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.486025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.486155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.486173] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.500 [2024-05-15 05:31:53.486300] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.500 [2024-05-15 05:31:53.486317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.500 #36 NEW cov: 12052 ft: 14997 corp: 20/599b lim: 35 exec/s: 36 rss: 71Mb L: 30/35 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:03.759 [2024-05-15 05:31:53.535284] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.759 [2024-05-15 05:31:53.535314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.759 [2024-05-15 05:31:53.535458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.759 [2024-05-15 05:31:53.535475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.759 [2024-05-15 05:31:53.535606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.759 [2024-05-15 05:31:53.535622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.759 #37 NEW cov: 12052 ft: 15042 corp: 21/626b lim: 35 exec/s: 37 rss: 72Mb L: 27/35 MS: 1 ChangeBinInt- 00:07:03.759 [2024-05-15 05:31:53.585455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.759 [2024-05-15 05:31:53.585483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.759 [2024-05-15 05:31:53.585609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.759 [2024-05-15 05:31:53.585625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.760 NEW_FUNC[1/2]: 0x4b0780 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:07:03.760 NEW_FUNC[2/2]: 0x1192ec0 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1597 00:07:03.760 #38 NEW cov: 12109 ft: 15220 corp: 22/653b lim: 35 exec/s: 38 rss: 72Mb L: 27/35 MS: 1 PersAutoDict- DE: "\377\377\377\377\001,.\015"- 00:07:03.760 [2024-05-15 05:31:53.646104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.646131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.646272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.646290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:03.760 [2024-05-15 05:31:53.646418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.646436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.760 #39 NEW cov: 12109 ft: 15225 corp: 23/680b lim: 35 exec/s: 39 rss: 72Mb L: 27/35 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:07:03.760 [2024-05-15 05:31:53.685239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.685267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.760 #40 NEW cov: 12109 ft: 15238 corp: 24/691b lim: 35 exec/s: 40 rss: 72Mb L: 11/35 MS: 1 ShuffleBytes- 00:07:03.760 [2024-05-15 05:31:53.736193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.736219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.736362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.736385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.736525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.736542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.736686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.736703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.760 #41 NEW cov: 12109 ft: 15269 corp: 25/721b lim: 35 exec/s: 41 rss: 72Mb L: 30/35 MS: 1 ChangeBinInt- 00:07:03.760 [2024-05-15 05:31:53.776671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.776698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.776835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.776859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.776996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.777011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.760 [2024-05-15 05:31:53.777146] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.760 [2024-05-15 05:31:53.777164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.018 #42 NEW cov: 12109 ft: 15271 corp: 26/751b lim: 35 exec/s: 42 rss: 72Mb L: 30/35 MS: 1 CrossOver- 00:07:04.018 [2024-05-15 05:31:53.827045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.827073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.827211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.827233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.827365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.827384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.827527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.827544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.827665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.827681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.018 #43 NEW cov: 12109 ft: 15284 corp: 27/786b lim: 35 exec/s: 43 rss: 72Mb L: 35/35 MS: 1 CrossOver- 00:07:04.018 [2024-05-15 05:31:53.876967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.876996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.877138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.877159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.877287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.877304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.877439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000071 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.877457] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.877593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.877610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.018 #44 NEW cov: 12109 ft: 15293 corp: 28/821b lim: 35 exec/s: 44 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:04.018 [2024-05-15 05:31:53.927306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.927331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.927471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.927486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.927614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.927631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.927754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.927769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.927897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.927914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.018 #45 NEW cov: 12109 ft: 15332 corp: 29/856b lim: 35 exec/s: 45 rss: 72Mb L: 35/35 MS: 1 CMP- DE: "\377\377~\371\000\032\372\245"- 00:07:04.018 [2024-05-15 05:31:53.967030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.967055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.967185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.967203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.967329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.967345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.967480] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.967496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:53.967608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:53.967624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.018 #46 NEW cov: 12109 ft: 15345 corp: 30/891b lim: 35 exec/s: 46 rss: 72Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:04.018 [2024-05-15 05:31:54.007644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:54.007670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:54.007814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:54.007829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:54.007965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:54.007985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:54.008132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:54.008154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.018 [2024-05-15 05:31:54.008302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.018 [2024-05-15 05:31:54.008326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.018 #47 NEW cov: 12109 ft: 15372 corp: 31/926b lim: 35 exec/s: 47 rss: 72Mb L: 35/35 MS: 1 ChangeBit- 00:07:04.277 [2024-05-15 05:31:54.057718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.057746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.057871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.057890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.058025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.058046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.058174] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.058189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.058318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.058334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.277 #48 NEW cov: 12109 ft: 15402 corp: 32/961b lim: 35 exec/s: 48 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:04.277 [2024-05-15 05:31:54.097612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.097639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.097760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.097780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.097905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.097921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.098047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.098065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.098194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.098211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.277 #49 NEW cov: 12109 ft: 15465 corp: 33/996b lim: 35 exec/s: 49 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:04.277 [2024-05-15 05:31:54.137976] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.138003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.138135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.138151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.138282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 
cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.138300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.138425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.138443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.138574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.138591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.277 #50 NEW cov: 12109 ft: 15478 corp: 34/1031b lim: 35 exec/s: 50 rss: 72Mb L: 35/35 MS: 1 ChangeBit- 00:07:04.277 [2024-05-15 05:31:54.187570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.187599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.187729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.187750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.277 [2024-05-15 05:31:54.187873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.277 [2024-05-15 05:31:54.187888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.277 #51 NEW cov: 12109 ft: 15506 corp: 35/1056b lim: 35 exec/s: 51 rss: 73Mb L: 25/35 MS: 1 EraseBytes- 00:07:04.277 [2024-05-15 05:31:54.237041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 05:31:54.237070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.278 [2024-05-15 05:31:54.237198] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 05:31:54.237216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.278 #52 NEW cov: 12109 ft: 15575 corp: 36/1074b lim: 35 exec/s: 52 rss: 73Mb L: 18/35 MS: 1 EraseBytes- 00:07:04.278 [2024-05-15 05:31:54.278404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 05:31:54.278440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.278 [2024-05-15 05:31:54.278562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 
05:31:54.278587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.278 [2024-05-15 05:31:54.278713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 05:31:54.278734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.278 [2024-05-15 05:31:54.278878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 05:31:54.278903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.278 [2024-05-15 05:31:54.279052] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:0000005b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.278 [2024-05-15 05:31:54.279076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.537 #53 NEW cov: 12109 ft: 15611 corp: 37/1109b lim: 35 exec/s: 26 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:07:04.537 #53 DONE cov: 12109 ft: 15611 corp: 37/1109b lim: 35 exec/s: 26 rss: 73Mb 00:07:04.537 ###### Recommended dictionary. ###### 00:07:04.537 "\377\377\377\377\001,.\015" # Uses: 1 00:07:04.537 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:04.537 "\377\377~\371\000\032\372\245" # Uses: 0 00:07:04.537 ###### End of recommended dictionary. ###### 00:07:04.537 Done 53 runs in 2 second(s) 00:07:04.537 [2024-05-15 05:31:54.310065] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 
00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:04.537 05:31:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:04.537 [2024-05-15 05:31:54.476430] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:04.537 [2024-05-15 05:31:54.476501] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272886 ] 00:07:04.537 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.796 [2024-05-15 05:31:54.658897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.796 [2024-05-15 05:31:54.724180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.796 [2024-05-15 05:31:54.783063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.796 [2024-05-15 05:31:54.799018] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:04.797 [2024-05-15 05:31:54.799412] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:04.797 INFO: Running with entropic power schedule (0xFF, 100). 00:07:04.797 INFO: Seed: 137560871 00:07:05.055 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:05.055 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:05.055 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:05.055 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.055 #2 INITED exec/s: 0 rss: 63Mb 00:07:05.055 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:05.055 This may also happen if the target rejected all inputs we tried so far 00:07:05.314 NEW_FUNC[1/672]: 0x497330 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:05.314 NEW_FUNC[2/672]: 0x4b72b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:05.314 #7 NEW cov: 11674 ft: 11646 corp: 2/8b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 5 InsertRepeatedBytes-ShuffleBytes-CrossOver-ChangeBinInt-CrossOver- 00:07:05.314 [2024-05-15 05:31:55.174943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.314 [2024-05-15 05:31:55.174985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.314 [2024-05-15 05:31:55.175021] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.314 [2024-05-15 05:31:55.175037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.314 [2024-05-15 05:31:55.175068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.314 [2024-05-15 05:31:55.175083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.314 NEW_FUNC[1/14]: 0x1736630 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:07:05.314 NEW_FUNC[2/14]: 0x1736870 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:07:05.314 #12 NEW cov: 11933 ft: 12642 corp: 3/29b lim: 35 exec/s: 0 rss: 71Mb L: 21/21 MS: 5 CrossOver-EraseBytes-ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:05.314 [2024-05-15 05:31:55.235012] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.314 [2024-05-15 05:31:55.235046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.314 [2024-05-15 05:31:55.235089] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.314 [2024-05-15 05:31:55.235104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.314 [2024-05-15 05:31:55.235133] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.314 [2024-05-15 05:31:55.235147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.314 #18 NEW cov: 11939 ft: 12898 corp: 4/50b lim: 35 exec/s: 0 rss: 71Mb L: 21/21 MS: 1 ChangeBit- 00:07:05.573 #19 NEW cov: 12024 ft: 13150 corp: 5/57b lim: 35 exec/s: 0 rss: 71Mb L: 7/21 MS: 1 ChangeByte- 00:07:05.573 NEW_FUNC[1/1]: 0x4b1af0 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:07:05.573 #24 NEW cov: 12047 ft: 13342 corp: 6/70b 
lim: 35 exec/s: 0 rss: 71Mb L: 13/21 MS: 5 ChangeBit-CopyPart-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:05.573 #25 NEW cov: 12047 ft: 13497 corp: 7/79b lim: 35 exec/s: 0 rss: 71Mb L: 9/21 MS: 1 CopyPart- 00:07:05.573 #26 NEW cov: 12047 ft: 13596 corp: 8/90b lim: 35 exec/s: 0 rss: 71Mb L: 11/21 MS: 1 CMP- DE: "\017\000\000\000"- 00:07:05.573 #28 NEW cov: 12047 ft: 13615 corp: 9/98b lim: 35 exec/s: 0 rss: 71Mb L: 8/21 MS: 2 EraseBytes-CopyPart- 00:07:05.832 #29 NEW cov: 12047 ft: 13660 corp: 10/106b lim: 35 exec/s: 0 rss: 71Mb L: 8/21 MS: 1 ShuffleBytes- 00:07:05.832 [2024-05-15 05:31:55.666002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.832 [2024-05-15 05:31:55.666038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.832 #30 NEW cov: 12047 ft: 13818 corp: 11/113b lim: 35 exec/s: 0 rss: 71Mb L: 7/21 MS: 1 ChangeBit- 00:07:05.832 [2024-05-15 05:31:55.716202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.832 [2024-05-15 05:31:55.716233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.832 [2024-05-15 05:31:55.716266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.832 [2024-05-15 05:31:55.716281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.832 [2024-05-15 05:31:55.716310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.832 [2024-05-15 05:31:55.716326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.832 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:05.832 #31 NEW cov: 12064 ft: 13870 corp: 12/135b lim: 35 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 InsertByte- 00:07:05.832 [2024-05-15 05:31:55.786317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007d6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.832 [2024-05-15 05:31:55.786349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.832 #32 NEW cov: 12064 ft: 13890 corp: 13/142b lim: 35 exec/s: 32 rss: 72Mb L: 7/22 MS: 1 ChangeBinInt- 00:07:06.091 #33 NEW cov: 12064 ft: 13919 corp: 14/151b lim: 35 exec/s: 33 rss: 72Mb L: 9/22 MS: 1 InsertByte- 00:07:06.091 #34 NEW cov: 12064 ft: 13996 corp: 15/160b lim: 35 exec/s: 34 rss: 72Mb L: 9/22 MS: 1 InsertByte- 00:07:06.091 #35 NEW cov: 12064 ft: 14005 corp: 16/172b lim: 35 exec/s: 35 rss: 72Mb L: 12/22 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:06.091 [2024-05-15 05:31:56.026922] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.091 [2024-05-15 05:31:56.026953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.091 #36 NEW cov: 12064 ft: 14025 corp: 17/179b lim: 35 exec/s: 36 rss: 
72Mb L: 7/22 MS: 1 ChangeBinInt- 00:07:06.350 #37 NEW cov: 12064 ft: 14049 corp: 18/188b lim: 35 exec/s: 37 rss: 72Mb L: 9/22 MS: 1 ShuffleBytes- 00:07:06.350 #38 NEW cov: 12064 ft: 14071 corp: 19/195b lim: 35 exec/s: 38 rss: 72Mb L: 7/22 MS: 1 CrossOver- 00:07:06.350 #39 NEW cov: 12064 ft: 14085 corp: 20/202b lim: 35 exec/s: 39 rss: 72Mb L: 7/22 MS: 1 ChangeByte- 00:07:06.350 [2024-05-15 05:31:56.247637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.350 [2024-05-15 05:31:56.247667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.350 [2024-05-15 05:31:56.247700] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.350 [2024-05-15 05:31:56.247714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.350 [2024-05-15 05:31:56.247744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.350 [2024-05-15 05:31:56.247758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.350 #40 NEW cov: 12064 ft: 14105 corp: 21/223b lim: 35 exec/s: 40 rss: 72Mb L: 21/22 MS: 1 ChangeByte- 00:07:06.350 #41 NEW cov: 12064 ft: 14120 corp: 22/232b lim: 35 exec/s: 41 rss: 72Mb L: 9/22 MS: 1 ChangeBit- 00:07:06.350 [2024-05-15 05:31:56.367985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.350 [2024-05-15 05:31:56.368017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.609 #42 NEW cov: 12064 ft: 14209 corp: 23/249b lim: 35 exec/s: 42 rss: 72Mb L: 17/22 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:06.609 [2024-05-15 05:31:56.438116] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.609 [2024-05-15 05:31:56.438145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.609 #43 NEW cov: 12064 ft: 14323 corp: 24/266b lim: 35 exec/s: 43 rss: 73Mb L: 17/22 MS: 1 ShuffleBytes- 00:07:06.609 [2024-05-15 05:31:56.508216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.609 [2024-05-15 05:31:56.508247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.609 #44 NEW cov: 12064 ft: 14357 corp: 25/273b lim: 35 exec/s: 44 rss: 73Mb L: 7/22 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:07:06.609 #45 NEW cov: 12064 ft: 14512 corp: 26/281b lim: 35 exec/s: 45 rss: 73Mb L: 8/22 MS: 1 ChangeBinInt- 00:07:06.869 #46 NEW cov: 12064 ft: 14552 corp: 27/288b lim: 35 exec/s: 46 rss: 73Mb L: 7/22 MS: 1 ShuffleBytes- 00:07:06.869 #47 NEW cov: 12064 ft: 14565 corp: 28/298b lim: 35 exec/s: 47 rss: 73Mb L: 10/22 MS: 1 InsertByte- 00:07:06.869 #48 NEW cov: 12064 ft: 14587 corp: 29/309b lim: 35 exec/s: 48 rss: 73Mb L: 11/22 MS: 1 CopyPart- 00:07:06.869 [2024-05-15 05:31:56.739531] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.869 [2024-05-15 05:31:56.739557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.869 [2024-05-15 05:31:56.739614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.869 [2024-05-15 05:31:56.739627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.869 #49 NEW cov: 12071 ft: 14708 corp: 30/323b lim: 35 exec/s: 49 rss: 73Mb L: 14/22 MS: 1 CrossOver- 00:07:06.869 [2024-05-15 05:31:56.779641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000073a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.869 [2024-05-15 05:31:56.779666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.869 [2024-05-15 05:31:56.779727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.869 [2024-05-15 05:31:56.779741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.869 #50 NEW cov: 12071 ft: 14723 corp: 31/338b lim: 35 exec/s: 50 rss: 73Mb L: 15/22 MS: 1 EraseBytes- 00:07:06.869 #51 NEW cov: 12071 ft: 14734 corp: 32/347b lim: 35 exec/s: 25 rss: 73Mb L: 9/22 MS: 1 EraseBytes- 00:07:06.869 #51 DONE cov: 12071 ft: 14734 corp: 32/347b lim: 35 exec/s: 25 rss: 73Mb 00:07:06.869 ###### Recommended dictionary. ###### 00:07:06.869 "\017\000\000\000" # Uses: 3 00:07:06.869 ###### End of recommended dictionary. 
###### 00:07:06.869 Done 51 runs in 2 second(s) 00:07:06.869 [2024-05-15 05:31:56.858385] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:07.128 05:31:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:07.128 [2024-05-15 05:31:57.024431] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:07.128 [2024-05-15 05:31:57.024503] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273289 ] 00:07:07.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.388 [2024-05-15 05:31:57.208585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.388 [2024-05-15 05:31:57.274500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.388 [2024-05-15 05:31:57.333857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.388 [2024-05-15 05:31:57.349808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:07.388 [2024-05-15 05:31:57.350231] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:07.388 INFO: Running with entropic power schedule (0xFF, 100). 00:07:07.388 INFO: Seed: 2689553826 00:07:07.388 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:07.388 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:07.388 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:07.388 INFO: A corpus is not provided, starting from an empty corpus 00:07:07.388 #2 INITED exec/s: 0 rss: 64Mb 00:07:07.388 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:07.388 This may also happen if the target rejected all inputs we tried so far 00:07:07.647 [2024-05-15 05:31:57.415333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.647 [2024-05-15 05:31:57.415364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.906 NEW_FUNC[1/685]: 0x4987e0 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:07.906 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:07.906 #14 NEW cov: 11886 ft: 11887 corp: 2/23b lim: 105 exec/s: 0 rss: 70Mb L: 22/22 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:07.906 [2024-05-15 05:31:57.746500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.906 [2024-05-15 05:31:57.746560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.906 [2024-05-15 05:31:57.746643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.906 [2024-05-15 05:31:57.746672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.907 [2024-05-15 05:31:57.746756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271376359 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.746785] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.907 NEW_FUNC[1/1]: 0x1d2d7f0 in _get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:332 00:07:07.907 #25 NEW cov: 12023 ft: 12936 corp: 3/87b lim: 105 exec/s: 0 rss: 70Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:07:07.907 [2024-05-15 05:31:57.806194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.806223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.907 #30 NEW cov: 12029 ft: 13283 corp: 4/110b lim: 105 exec/s: 0 rss: 70Mb L: 23/64 MS: 5 CopyPart-InsertByte-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:07:07.907 [2024-05-15 05:31:57.846530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.846560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.907 [2024-05-15 05:31:57.846597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.846611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.907 [2024-05-15 05:31:57.846668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271376221 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.846684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.907 #31 NEW cov: 12114 ft: 13555 corp: 5/175b lim: 105 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 CrossOver- 00:07:07.907 [2024-05-15 05:31:57.896725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.896754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.907 [2024-05-15 05:31:57.896798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.896814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.907 [2024-05-15 05:31:57.896868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6766631946037321053 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.907 [2024-05-15 05:31:57.896884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.907 #32 NEW cov: 12114 ft: 13692 corp: 6/241b lim: 105 exec/s: 0 rss: 71Mb L: 66/66 MS: 1 InsertByte- 00:07:08.166 [2024-05-15 05:31:57.946860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:57.946888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:57.946922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:57.946937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:57.946992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:57.947008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.166 #34 NEW cov: 12114 ft: 13743 corp: 7/305b lim: 105 exec/s: 0 rss: 71Mb L: 64/66 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:08.166 [2024-05-15 05:31:57.986924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:57.986952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:57.986999] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:57.987014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:57.987069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4277830201961700701 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:57.987084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.166 #35 NEW cov: 12114 ft: 13914 corp: 8/372b lim: 105 exec/s: 0 rss: 71Mb L: 67/67 MS: 1 InsertByte- 00:07:08.166 [2024-05-15 05:31:58.037188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.037217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:58.037264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.037279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:58.037332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.037348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:58.037408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.037424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.166 #36 NEW cov: 12114 ft: 14466 corp: 9/476b lim: 105 exec/s: 0 rss: 71Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:07:08.166 [2024-05-15 05:31:58.076951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.076982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.166 #40 NEW cov: 12114 ft: 14506 corp: 10/503b lim: 105 exec/s: 0 rss: 71Mb L: 27/104 MS: 4 EraseBytes-ShuffleBytes-ChangeBit-CMP- DE: "\377\377\377\377\377\377\377G"- 00:07:08.166 [2024-05-15 05:31:58.127220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.127248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:58.127280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:24040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.127295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.166 #41 NEW cov: 12114 ft: 14799 corp: 11/554b lim: 105 exec/s: 0 rss: 71Mb L: 51/104 MS: 1 EraseBytes- 00:07:08.166 [2024-05-15 05:31:58.167339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2816 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.167367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.166 [2024-05-15 05:31:58.167405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636072934497629 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.166 [2024-05-15 05:31:58.167422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.426 #42 NEW cov: 12114 ft: 14823 corp: 12/613b lim: 105 exec/s: 0 rss: 72Mb L: 59/104 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:07:08.426 [2024-05-15 05:31:58.217390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.217419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.426 #43 NEW cov: 12114 ft: 14868 corp: 13/640b lim: 105 exec/s: 0 rss: 72Mb L: 27/104 MS: 1 ChangeBinInt- 00:07:08.426 [2024-05-15 05:31:58.267751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.267779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.267814] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636623696944477 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.267829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.267885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271376359 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.267900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.426 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:08.426 #44 NEW cov: 12137 ft: 14900 corp: 14/704b lim: 105 exec/s: 0 rss: 72Mb L: 64/104 MS: 1 ChangeBit- 00:07:08.426 [2024-05-15 05:31:58.307574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.307603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.426 #45 NEW cov: 12137 ft: 14957 corp: 15/727b lim: 105 exec/s: 0 rss: 72Mb L: 23/104 MS: 1 ChangeBit- 00:07:08.426 [2024-05-15 05:31:58.347735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.347764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.426 #46 NEW cov: 12137 ft: 15057 corp: 16/750b lim: 105 exec/s: 0 rss: 72Mb L: 23/104 MS: 1 ChangeByte- 00:07:08.426 [2024-05-15 05:31:58.398126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.398156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.398194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579925595711463 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.398210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.398264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271411687 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.398280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.426 #47 NEW cov: 12137 ft: 15098 corp: 17/813b lim: 105 exec/s: 47 rss: 72Mb L: 63/104 MS: 1 CopyPart- 00:07:08.426 [2024-05-15 05:31:58.438410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.438439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.438487] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.438502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.438556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.438572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.426 [2024-05-15 05:31:58.438628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.426 [2024-05-15 05:31:58.438643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.686 #48 NEW cov: 12137 ft: 15156 corp: 18/913b lim: 105 exec/s: 48 rss: 72Mb L: 100/104 MS: 1 InsertRepeatedBytes- 00:07:08.686 [2024-05-15 05:31:58.478243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17868022691139155959 len:63480 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.686 [2024-05-15 05:31:58.478272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.686 [2024-05-15 05:31:58.478306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17868022691004938231 len:63480 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.686 [2024-05-15 05:31:58.478321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.686 #51 NEW cov: 12137 ft: 15182 corp: 19/959b lim: 105 exec/s: 51 rss: 72Mb L: 46/104 MS: 3 PersAutoDict-PersAutoDict-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\377G"-"\377\377\377\377\377\377\377G"- 00:07:08.686 [2024-05-15 05:31:58.518358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.518395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.518445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:24040 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.518460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.687 #52 NEW cov: 12137 ft: 15216 corp: 20/1012b lim: 105 exec/s: 52 rss: 72Mb L: 53/104 MS: 1 CMP- DE: "\377~"- 00:07:08.687 [2024-05-15 05:31:58.558595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.558623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.558670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636623696944477 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:08.687 [2024-05-15 05:31:58.558686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.558740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271376359 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.558756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.687 #53 NEW cov: 12137 ft: 15245 corp: 21/1076b lim: 105 exec/s: 53 rss: 72Mb L: 64/104 MS: 1 ChangeByte- 00:07:08.687 [2024-05-15 05:31:58.608736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.608763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.608810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.608825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.608880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6766631946037321053 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.608895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.687 #54 NEW cov: 12137 ft: 15257 corp: 22/1142b lim: 105 exec/s: 54 rss: 72Mb L: 66/104 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:07:08.687 [2024-05-15 05:31:58.648622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.648651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.687 #55 NEW cov: 12137 ft: 15270 corp: 23/1169b lim: 105 exec/s: 55 rss: 72Mb L: 27/104 MS: 1 InsertRepeatedBytes- 00:07:08.687 [2024-05-15 05:31:58.688960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:144115191953817600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.688988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.689028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727544814476025181 len:23809 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.689043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.687 [2024-05-15 05:31:58.689103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.687 [2024-05-15 05:31:58.689120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.946 #56 NEW cov: 12137 
ft: 15294 corp: 24/1247b lim: 105 exec/s: 56 rss: 72Mb L: 78/104 MS: 1 CrossOver- 00:07:08.946 [2024-05-15 05:31:58.739071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465831 len:23819 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.739099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.946 [2024-05-15 05:31:58.739131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.739146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.946 [2024-05-15 05:31:58.739203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271376221 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.739220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.946 #57 NEW cov: 12137 ft: 15320 corp: 25/1312b lim: 105 exec/s: 57 rss: 72Mb L: 65/104 MS: 1 CrossOver- 00:07:08.946 [2024-05-15 05:31:58.779183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:144115191953817600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.779212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.946 [2024-05-15 05:31:58.779255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727544814476025181 len:23809 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.779269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.946 [2024-05-15 05:31:58.779327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.779344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.946 #58 NEW cov: 12137 ft: 15333 corp: 26/1390b lim: 105 exec/s: 58 rss: 73Mb L: 78/104 MS: 1 ChangeBinInt- 00:07:08.946 [2024-05-15 05:31:58.829091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:624361472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.829120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.946 #59 NEW cov: 12137 ft: 15344 corp: 27/1417b lim: 105 exec/s: 59 rss: 73Mb L: 27/104 MS: 1 ChangeByte- 00:07:08.946 [2024-05-15 05:31:58.879248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:620756992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.879277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.946 #60 NEW cov: 12137 ft: 15392 corp: 28/1444b lim: 105 exec/s: 60 rss: 73Mb L: 27/104 MS: 1 CrossOver- 00:07:08.946 [2024-05-15 05:31:58.929407] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.946 [2024-05-15 05:31:58.929437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.947 #61 NEW cov: 12137 ft: 15398 corp: 29/1481b lim: 105 exec/s: 61 rss: 73Mb L: 37/104 MS: 1 CrossOver- 00:07:09.206 [2024-05-15 05:31:58.969702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17868022691139155959 len:63480 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:58.969731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:58.969765] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17868022691004938231 len:63480 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:58.969782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.206 #62 NEW cov: 12137 ft: 15428 corp: 30/1527b lim: 105 exec/s: 62 rss: 73Mb L: 46/104 MS: 1 ShuffleBytes- 00:07:09.206 [2024-05-15 05:31:59.019827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.019856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.019896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:16710579923271376221 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.019911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.206 #63 NEW cov: 12137 ft: 15442 corp: 31/1571b lim: 105 exec/s: 63 rss: 73Mb L: 44/104 MS: 1 EraseBytes- 00:07:09.206 [2024-05-15 05:31:59.069908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.069937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.069969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.069984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.206 #64 NEW cov: 12137 ft: 15473 corp: 32/1627b lim: 105 exec/s: 64 rss: 73Mb L: 56/104 MS: 1 EraseBytes- 00:07:09.206 [2024-05-15 05:31:59.120225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:144115191953817600 len:209 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.120253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.120288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727544814476025181 len:23809 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.120302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.120358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.120374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.206 #65 NEW cov: 12137 ft: 15486 corp: 33/1705b lim: 105 exec/s: 65 rss: 73Mb L: 78/104 MS: 1 ChangeByte- 00:07:09.206 [2024-05-15 05:31:59.160402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.160430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.160485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.160500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.160559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:16710579923271376221 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.160575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.160629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6727636073941130589 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.160644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.206 #66 NEW cov: 12137 ft: 15499 corp: 34/1798b lim: 105 exec/s: 66 rss: 73Mb L: 93/104 MS: 1 CopyPart- 00:07:09.206 [2024-05-15 05:31:59.200398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.200426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.200474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.206 [2024-05-15 05:31:59.200490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.206 [2024-05-15 05:31:59.200546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4277830201961700701 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.207 [2024-05-15 05:31:59.200563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.207 #67 NEW cov: 12137 ft: 15512 corp: 35/1865b lim: 105 exec/s: 67 rss: 73Mb L: 67/104 MS: 1 ChangeByte- 00:07:09.466 [2024-05-15 
05:31:59.240323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.240352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.466 #68 NEW cov: 12137 ft: 15528 corp: 36/1888b lim: 105 exec/s: 68 rss: 73Mb L: 23/104 MS: 1 InsertByte- 00:07:09.466 [2024-05-15 05:31:59.280702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:154529766092111872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.280730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.466 [2024-05-15 05:31:59.280777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:11394548236288 len:23809 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.280793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.466 [2024-05-15 05:31:59.280848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.280865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.466 #69 NEW cov: 12137 ft: 15539 corp: 37/1966b lim: 105 exec/s: 69 rss: 73Mb L: 78/104 MS: 1 CrossOver- 00:07:09.466 [2024-05-15 05:31:59.330544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:16710579925595711463 len:59368 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.330573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.466 #70 NEW cov: 12137 ft: 15559 corp: 38/1988b lim: 105 exec/s: 70 rss: 74Mb L: 22/104 MS: 1 ShuffleBytes- 00:07:09.466 [2024-05-15 05:31:59.370969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.370997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.466 [2024-05-15 05:31:59.371046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.371062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.466 [2024-05-15 05:31:59.371117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6766631946037321053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.371131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.466 [2024-05-15 05:31:59.371187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.371203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.466 #71 NEW cov: 12137 ft: 15570 corp: 39/2087b lim: 105 exec/s: 71 rss: 74Mb L: 99/104 MS: 1 InsertRepeatedBytes- 00:07:09.466 [2024-05-15 05:31:59.411022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6727636076265465693 len:2654 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.466 [2024-05-15 05:31:59.411050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.467 [2024-05-15 05:31:59.411098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6727635816243092829 len:23902 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.467 [2024-05-15 05:31:59.411114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.467 [2024-05-15 05:31:59.411169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:1323468846406655325 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:09.467 [2024-05-15 05:31:59.411184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.467 #72 NEW cov: 12137 ft: 15583 corp: 40/2154b lim: 105 exec/s: 36 rss: 74Mb L: 67/104 MS: 1 InsertByte- 00:07:09.467 #72 DONE cov: 12137 ft: 15583 corp: 40/2154b lim: 105 exec/s: 36 rss: 74Mb 00:07:09.467 ###### Recommended dictionary. ###### 00:07:09.467 "\377\377\377\377\377\377\377G" # Uses: 4 00:07:09.467 "\377~" # Uses: 0 00:07:09.467 ###### End of recommended dictionary. ###### 00:07:09.467 Done 72 runs in 2 second(s) 00:07:09.467 [2024-05-15 05:31:59.441439] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:09.726 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.727 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:09.727 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:09.727 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:09.727 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:09.727 05:31:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:09.727 [2024-05-15 05:31:59.610867] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:09.727 [2024-05-15 05:31:59.610966] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273821 ] 00:07:09.727 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.986 [2024-05-15 05:31:59.794386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.986 [2024-05-15 05:31:59.860052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.986 [2024-05-15 05:31:59.918934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.986 [2024-05-15 05:31:59.934886] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:09.986 [2024-05-15 05:31:59.935307] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:09.986 INFO: Running with entropic power schedule (0xFF, 100). 00:07:09.986 INFO: Seed: 977581120 00:07:09.986 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:09.986 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:09.986 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:09.986 INFO: A corpus is not provided, starting from an empty corpus 00:07:09.986 #2 INITED exec/s: 0 rss: 63Mb 00:07:09.986 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:09.986 This may also happen if the target rejected all inputs we tried so far 00:07:09.986 [2024-05-15 05:31:59.983818] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.986 [2024-05-15 05:31:59.983850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.505 NEW_FUNC[1/687]: 0x49bb60 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:10.505 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.505 #17 NEW cov: 11905 ft: 11915 corp: 2/33b lim: 120 exec/s: 0 rss: 70Mb L: 32/32 MS: 5 ChangeByte-CopyPart-EraseBytes-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:10.505 [2024-05-15 05:32:00.325458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.505 [2024-05-15 05:32:00.325516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.505 #28 NEW cov: 12044 ft: 12750 corp: 3/65b lim: 120 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBit- 00:07:10.505 [2024-05-15 05:32:00.375396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.505 [2024-05-15 05:32:00.375428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.505 #29 NEW cov: 12050 ft: 12965 corp: 4/97b lim: 120 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeByte- 00:07:10.505 [2024-05-15 05:32:00.425572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.505 [2024-05-15 05:32:00.425598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.505 #30 NEW cov: 12135 ft: 13209 corp: 5/129b lim: 120 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:10.505 [2024-05-15 05:32:00.475749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.505 [2024-05-15 05:32:00.475775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.505 #31 NEW cov: 12135 ft: 13289 corp: 6/162b lim: 120 exec/s: 0 rss: 70Mb L: 33/33 MS: 1 CrossOver- 00:07:10.505 [2024-05-15 05:32:00.525907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125900443713536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.505 [2024-05-15 05:32:00.525933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.765 #32 NEW cov: 12135 ft: 13390 corp: 7/194b lim: 120 exec/s: 0 rss: 70Mb L: 32/33 MS: 1 ChangeBinInt- 00:07:10.765 [2024-05-15 05:32:00.565951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125900460031744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.565977] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.765 #33 NEW cov: 12135 ft: 13461 corp: 8/226b lim: 120 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 ChangeBinInt- 00:07:10.765 [2024-05-15 05:32:00.616160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125900460031744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.616186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.765 #34 NEW cov: 12135 ft: 13480 corp: 9/258b lim: 120 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 CopyPart- 00:07:10.765 [2024-05-15 05:32:00.666265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.666291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.765 #35 NEW cov: 12135 ft: 13579 corp: 10/290b lim: 120 exec/s: 0 rss: 71Mb L: 32/33 MS: 1 ChangeBit- 00:07:10.765 [2024-05-15 05:32:00.706586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.706621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.765 [2024-05-15 05:32:00.706725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.706746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.765 #41 NEW cov: 12135 ft: 14415 corp: 11/348b lim: 120 exec/s: 0 rss: 71Mb L: 58/58 MS: 1 CrossOver- 00:07:10.765 [2024-05-15 05:32:00.746721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.746754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.765 [2024-05-15 05:32:00.746873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15046526358548173008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.765 [2024-05-15 05:32:00.746896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.765 #42 NEW cov: 12135 ft: 14439 corp: 12/401b lim: 120 exec/s: 0 rss: 71Mb L: 53/58 MS: 1 InsertRepeatedBytes- 00:07:11.025 [2024-05-15 05:32:00.786635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899910971392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:00.786662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.025 #48 NEW cov: 12135 ft: 14445 corp: 13/433b lim: 120 exec/s: 0 rss: 71Mb L: 32/58 MS: 1 ChangeByte- 00:07:11.025 [2024-05-15 05:32:00.826906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125908496777216 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:11.025 [2024-05-15 05:32:00.826941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.025 [2024-05-15 05:32:00.827045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15046526358548173008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:00.827066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.025 #49 NEW cov: 12135 ft: 14497 corp: 14/486b lim: 120 exec/s: 0 rss: 71Mb L: 53/58 MS: 1 ChangeBit- 00:07:11.025 [2024-05-15 05:32:00.876825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:00.876851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.025 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:11.025 #50 NEW cov: 12158 ft: 14573 corp: 15/518b lim: 120 exec/s: 0 rss: 71Mb L: 32/58 MS: 1 ChangeBit- 00:07:11.025 [2024-05-15 05:32:00.916973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:00.916998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.025 #51 NEW cov: 12158 ft: 14587 corp: 16/550b lim: 120 exec/s: 0 rss: 71Mb L: 32/58 MS: 1 ShuffleBytes- 00:07:11.025 [2024-05-15 05:32:00.957096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:00.957121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.025 #52 NEW cov: 12158 ft: 14598 corp: 17/581b lim: 120 exec/s: 52 rss: 71Mb L: 31/58 MS: 1 EraseBytes- 00:07:11.025 [2024-05-15 05:32:00.997162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:00.997192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.025 #53 NEW cov: 12158 ft: 14628 corp: 18/613b lim: 120 exec/s: 53 rss: 71Mb L: 32/58 MS: 1 ShuffleBytes- 00:07:11.025 [2024-05-15 05:32:01.037354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.025 [2024-05-15 05:32:01.037386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.284 #54 NEW cov: 12158 ft: 14717 corp: 19/644b lim: 120 exec/s: 54 rss: 71Mb L: 31/58 MS: 1 ShuffleBytes- 00:07:11.284 [2024-05-15 05:32:01.087406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125904201809920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.087439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:11.284 #55 NEW cov: 12158 ft: 14788 corp: 20/677b lim: 120 exec/s: 55 rss: 71Mb L: 33/58 MS: 1 ChangeBit- 00:07:11.284 [2024-05-15 05:32:01.137596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4398868594688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.137626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.284 #60 NEW cov: 12158 ft: 14836 corp: 21/711b lim: 120 exec/s: 60 rss: 71Mb L: 34/58 MS: 5 ShuffleBytes-ChangeBinInt-InsertByte-ChangeBit-CrossOver- 00:07:11.284 [2024-05-15 05:32:01.178459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.178491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.284 [2024-05-15 05:32:01.178563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.178584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.284 [2024-05-15 05:32:01.178698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.178722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.284 [2024-05-15 05:32:01.178837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.178861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.284 #61 NEW cov: 12158 ft: 15267 corp: 22/816b lim: 120 exec/s: 61 rss: 71Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:07:11.284 [2024-05-15 05:32:01.218108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.218138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.284 [2024-05-15 05:32:01.218230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15046526358548173008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.218254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.284 #62 NEW cov: 12158 ft: 15298 corp: 23/869b lim: 120 exec/s: 62 rss: 71Mb L: 53/105 MS: 1 ChangeBit- 00:07:11.284 [2024-05-15 05:32:01.257997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.284 [2024-05-15 05:32:01.258028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.284 #63 NEW cov: 12158 ft: 15312 corp: 24/914b lim: 120 exec/s: 63 
rss: 72Mb L: 45/105 MS: 1 CrossOver- 00:07:11.543 [2024-05-15 05:32:01.308188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.308221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.544 [2024-05-15 05:32:01.308350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:288230376151711744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.308385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.544 #64 NEW cov: 12158 ft: 15338 corp: 25/969b lim: 120 exec/s: 64 rss: 72Mb L: 55/105 MS: 1 InsertRepeatedBytes- 00:07:11.544 [2024-05-15 05:32:01.358218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.358249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.544 #65 NEW cov: 12158 ft: 15349 corp: 26/1001b lim: 120 exec/s: 65 rss: 72Mb L: 32/105 MS: 1 InsertByte- 00:07:11.544 [2024-05-15 05:32:01.408389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4398868594688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.408417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.544 #66 NEW cov: 12158 ft: 15362 corp: 27/1035b lim: 120 exec/s: 66 rss: 72Mb L: 34/105 MS: 1 ChangeByte- 00:07:11.544 [2024-05-15 05:32:01.458811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125908496777216 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.458844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.544 [2024-05-15 05:32:01.458936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15046543950734217424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.458959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.544 #67 NEW cov: 12158 ft: 15377 corp: 28/1088b lim: 120 exec/s: 67 rss: 72Mb L: 53/105 MS: 1 ChangeBit- 00:07:11.544 [2024-05-15 05:32:01.508704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1126793260040192 len:209 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.508736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.544 #68 NEW cov: 12158 ft: 15453 corp: 29/1120b lim: 120 exec/s: 68 rss: 72Mb L: 32/105 MS: 1 ShuffleBytes- 00:07:11.544 [2024-05-15 05:32:01.559351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.559386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:07:11.544 [2024-05-15 05:32:01.559440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8246779703540740722 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.559462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.544 [2024-05-15 05:32:01.559579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.544 [2024-05-15 05:32:01.559603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.847 #69 NEW cov: 12158 ft: 15739 corp: 30/1199b lim: 120 exec/s: 69 rss: 72Mb L: 79/105 MS: 1 InsertRepeatedBytes- 00:07:11.847 [2024-05-15 05:32:01.599003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.847 [2024-05-15 05:32:01.599033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.847 #70 NEW cov: 12158 ft: 15791 corp: 31/1227b lim: 120 exec/s: 70 rss: 72Mb L: 28/105 MS: 1 EraseBytes- 00:07:11.847 [2024-05-15 05:32:01.639075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:22617 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.847 [2024-05-15 05:32:01.639102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.847 #71 NEW cov: 12158 ft: 15796 corp: 32/1269b lim: 120 exec/s: 71 rss: 72Mb L: 42/105 MS: 1 InsertRepeatedBytes- 00:07:11.847 [2024-05-15 05:32:01.679998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.847 [2024-05-15 05:32:01.680028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.848 [2024-05-15 05:32:01.680085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.680104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.848 [2024-05-15 05:32:01.680235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744072042119167 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.680255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.848 [2024-05-15 05:32:01.680398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11285066962739960988 len:40093 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.680420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.848 #77 NEW cov: 12158 ft: 15799 corp: 33/1388b lim: 120 exec/s: 77 rss: 72Mb L: 119/119 MS: 1 InsertRepeatedBytes- 00:07:11.848 [2024-05-15 05:32:01.729591] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.729626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.848 [2024-05-15 05:32:01.729741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8246779703540740722 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.729763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.848 [2024-05-15 05:32:01.729881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:8246779703540740722 len:29299 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.729902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.848 #78 NEW cov: 12158 ft: 15811 corp: 34/1467b lim: 120 exec/s: 78 rss: 72Mb L: 79/119 MS: 1 ShuffleBytes- 00:07:11.848 [2024-05-15 05:32:01.779538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11821949021847552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.779571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.848 #79 NEW cov: 12158 ft: 15827 corp: 35/1500b lim: 120 exec/s: 79 rss: 72Mb L: 33/119 MS: 1 InsertByte- 00:07:11.848 [2024-05-15 05:32:01.819810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72340172838076673 len:258 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.819845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.848 [2024-05-15 05:32:01.819932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72340172821168385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.848 [2024-05-15 05:32:01.819956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.848 #85 NEW cov: 12158 ft: 15832 corp: 36/1563b lim: 120 exec/s: 85 rss: 72Mb L: 63/119 MS: 1 InsertRepeatedBytes- 00:07:12.128 [2024-05-15 05:32:01.869774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:73183493948899328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.128 [2024-05-15 05:32:01.869802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.128 #86 NEW cov: 12158 ft: 15846 corp: 37/1595b lim: 120 exec/s: 86 rss: 73Mb L: 32/119 MS: 1 ChangeBinInt- 00:07:12.128 [2024-05-15 05:32:01.920151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:53457 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.128 [2024-05-15 05:32:01.920186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.128 [2024-05-15 05:32:01.920301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:15046755056960262352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.128 [2024-05-15 
05:32:01.920321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.128 #87 NEW cov: 12158 ft: 15847 corp: 38/1649b lim: 120 exec/s: 87 rss: 73Mb L: 54/119 MS: 1 InsertByte- 00:07:12.128 [2024-05-15 05:32:01.959987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.128 [2024-05-15 05:32:01.960013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.128 #88 NEW cov: 12158 ft: 15855 corp: 39/1679b lim: 120 exec/s: 44 rss: 73Mb L: 30/119 MS: 1 EraseBytes- 00:07:12.128 #88 DONE cov: 12158 ft: 15855 corp: 39/1679b lim: 120 exec/s: 44 rss: 73Mb 00:07:12.128 Done 88 runs in 2 second(s) 00:07:12.128 [2024-05-15 05:32:01.990565] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:12.128 05:32:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:12.387 [2024-05-15 05:32:02.160796] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:12.387 [2024-05-15 05:32:02.160863] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274232 ] 00:07:12.387 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.387 [2024-05-15 05:32:02.347911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.647 [2024-05-15 05:32:02.413848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.647 [2024-05-15 05:32:02.472970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.647 [2024-05-15 05:32:02.488924] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:12.647 [2024-05-15 05:32:02.489320] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:12.647 INFO: Running with entropic power schedule (0xFF, 100). 00:07:12.647 INFO: Seed: 3531581577 00:07:12.647 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:12.647 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:12.647 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:12.647 INFO: A corpus is not provided, starting from an empty corpus 00:07:12.647 #2 INITED exec/s: 0 rss: 64Mb 00:07:12.647 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:12.647 This may also happen if the target rejected all inputs we tried so far 00:07:12.647 [2024-05-15 05:32:02.537936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.647 [2024-05-15 05:32:02.537966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.647 [2024-05-15 05:32:02.537996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.647 [2024-05-15 05:32:02.538011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.647 [2024-05-15 05:32:02.538066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.647 [2024-05-15 05:32:02.538081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.907 NEW_FUNC[1/684]: 0x49f450 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:12.907 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:12.907 #4 NEW cov: 11856 ft: 11857 corp: 2/61b lim: 100 exec/s: 0 rss: 70Mb L: 60/60 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:12.907 [2024-05-15 05:32:02.848706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.907 [2024-05-15 05:32:02.848742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.907 [2024-05-15 05:32:02.848797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.907 [2024-05-15 05:32:02.848813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.907 [2024-05-15 05:32:02.848868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.907 [2024-05-15 05:32:02.848882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.908 NEW_FUNC[1/1]: 0xef92c0 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:290 00:07:12.908 #10 NEW cov: 11987 ft: 12347 corp: 3/127b lim: 100 exec/s: 0 rss: 70Mb L: 66/66 MS: 1 CrossOver- 00:07:12.908 [2024-05-15 05:32:02.898725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:12.908 [2024-05-15 05:32:02.898753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.908 [2024-05-15 05:32:02.898788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:12.908 [2024-05-15 05:32:02.898803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.908 [2024-05-15 05:32:02.898855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:12.908 [2024-05-15 05:32:02.898870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.908 #13 NEW cov: 11993 ft: 12632 corp: 4/194b lim: 100 exec/s: 0 rss: 70Mb L: 67/67 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:07:13.167 [2024-05-15 05:32:02.938706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.167 [2024-05-15 05:32:02.938736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.167 [2024-05-15 05:32:02.938783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.167 [2024-05-15 05:32:02.938799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.167 #14 NEW cov: 12078 ft: 13287 corp: 5/239b lim: 100 exec/s: 0 rss: 70Mb L: 45/67 MS: 1 InsertRepeatedBytes- 00:07:13.167 [2024-05-15 05:32:02.978764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.167 [2024-05-15 05:32:02.978791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.167 #16 NEW cov: 12078 ft: 13664 corp: 6/278b lim: 100 exec/s: 0 rss: 70Mb L: 39/67 MS: 2 InsertByte-CrossOver- 00:07:13.167 [2024-05-15 05:32:03.018992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.167 [2024-05-15 05:32:03.019018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.167 [2024-05-15 05:32:03.019059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.167 [2024-05-15 05:32:03.019073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.167 #17 NEW cov: 12078 ft: 13745 corp: 7/324b lim: 100 exec/s: 0 rss: 71Mb L: 46/67 MS: 1 InsertByte- 00:07:13.167 [2024-05-15 05:32:03.068991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.167 [2024-05-15 05:32:03.069017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.167 #23 NEW cov: 12078 ft: 13777 corp: 8/363b lim: 100 exec/s: 0 rss: 71Mb L: 39/67 MS: 1 ChangeASCIIInt- 00:07:13.167 [2024-05-15 05:32:03.119326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.167 [2024-05-15 05:32:03.119353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.167 [2024-05-15 05:32:03.119405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.167 [2024-05-15 05:32:03.119417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.167 [2024-05-15 05:32:03.119472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.167 [2024-05-15 05:32:03.119490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.167 #24 NEW cov: 12078 ft: 13881 corp: 9/423b 
lim: 100 exec/s: 0 rss: 71Mb L: 60/67 MS: 1 ShuffleBytes- 00:07:13.167 [2024-05-15 05:32:03.159461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.167 [2024-05-15 05:32:03.159497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.167 [2024-05-15 05:32:03.159543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.167 [2024-05-15 05:32:03.159557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.167 [2024-05-15 05:32:03.159611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.167 [2024-05-15 05:32:03.159625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.426 #25 NEW cov: 12078 ft: 13921 corp: 10/490b lim: 100 exec/s: 0 rss: 71Mb L: 67/67 MS: 1 ShuffleBytes- 00:07:13.426 [2024-05-15 05:32:03.209491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.426 [2024-05-15 05:32:03.209518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.209550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.426 [2024-05-15 05:32:03.209565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.426 #26 NEW cov: 12078 ft: 13944 corp: 11/535b lim: 100 exec/s: 0 rss: 71Mb L: 45/67 MS: 1 ChangeBit- 00:07:13.426 [2024-05-15 05:32:03.249723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.426 [2024-05-15 05:32:03.249749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.249791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.426 [2024-05-15 05:32:03.249805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.249859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.426 [2024-05-15 05:32:03.249873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.426 #27 NEW cov: 12078 ft: 14011 corp: 12/608b lim: 100 exec/s: 0 rss: 71Mb L: 73/73 MS: 1 InsertRepeatedBytes- 00:07:13.426 [2024-05-15 05:32:03.289805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.426 [2024-05-15 05:32:03.289831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.289878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.426 [2024-05-15 05:32:03.289890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:13.426 [2024-05-15 05:32:03.289944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.426 [2024-05-15 05:32:03.289959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.426 #28 NEW cov: 12078 ft: 14059 corp: 13/680b lim: 100 exec/s: 0 rss: 71Mb L: 72/73 MS: 1 CrossOver- 00:07:13.426 [2024-05-15 05:32:03.339997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.426 [2024-05-15 05:32:03.340022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.340052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.426 [2024-05-15 05:32:03.340068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.340122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.426 [2024-05-15 05:32:03.340138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.426 #29 NEW cov: 12078 ft: 14093 corp: 14/742b lim: 100 exec/s: 0 rss: 71Mb L: 62/73 MS: 1 EraseBytes- 00:07:13.426 [2024-05-15 05:32:03.379858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.426 [2024-05-15 05:32:03.379885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.426 #35 NEW cov: 12078 ft: 14109 corp: 15/775b lim: 100 exec/s: 0 rss: 71Mb L: 33/73 MS: 1 EraseBytes- 00:07:13.426 [2024-05-15 05:32:03.420228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.426 [2024-05-15 05:32:03.420256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.420290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.426 [2024-05-15 05:32:03.420304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.426 [2024-05-15 05:32:03.420357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.426 [2024-05-15 05:32:03.420372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.685 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:13.685 #36 NEW cov: 12101 ft: 14145 corp: 16/849b lim: 100 exec/s: 0 rss: 72Mb L: 74/74 MS: 1 InsertByte- 00:07:13.685 [2024-05-15 05:32:03.470264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.685 [2024-05-15 05:32:03.470292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.470335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.685 
[2024-05-15 05:32:03.470349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.685 #37 NEW cov: 12101 ft: 14156 corp: 17/894b lim: 100 exec/s: 0 rss: 72Mb L: 45/74 MS: 1 CopyPart- 00:07:13.685 [2024-05-15 05:32:03.510476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.685 [2024-05-15 05:32:03.510503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.510544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.685 [2024-05-15 05:32:03.510557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.510612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.685 [2024-05-15 05:32:03.510626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.685 #38 NEW cov: 12101 ft: 14206 corp: 18/962b lim: 100 exec/s: 38 rss: 72Mb L: 68/74 MS: 1 InsertByte- 00:07:13.685 [2024-05-15 05:32:03.550555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.685 [2024-05-15 05:32:03.550582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.550617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.685 [2024-05-15 05:32:03.550631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.550685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.685 [2024-05-15 05:32:03.550700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.685 #39 NEW cov: 12101 ft: 14217 corp: 19/1032b lim: 100 exec/s: 39 rss: 72Mb L: 70/74 MS: 1 CMP- DE: "\377\377\377\005"- 00:07:13.685 [2024-05-15 05:32:03.600764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.685 [2024-05-15 05:32:03.600790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.600823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.685 [2024-05-15 05:32:03.600838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.600893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.685 [2024-05-15 05:32:03.600908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.685 #40 NEW cov: 12101 ft: 14263 corp: 20/1106b lim: 100 exec/s: 40 rss: 72Mb L: 74/74 MS: 1 InsertByte- 00:07:13.685 [2024-05-15 05:32:03.640705] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.685 [2024-05-15 05:32:03.640731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.640765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.685 [2024-05-15 05:32:03.640780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.685 #41 NEW cov: 12101 ft: 14317 corp: 21/1148b lim: 100 exec/s: 41 rss: 72Mb L: 42/74 MS: 1 EraseBytes- 00:07:13.685 [2024-05-15 05:32:03.691013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.685 [2024-05-15 05:32:03.691040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.691082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.685 [2024-05-15 05:32:03.691096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.685 [2024-05-15 05:32:03.691151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.685 [2024-05-15 05:32:03.691167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.944 #42 NEW cov: 12101 ft: 14376 corp: 22/1218b lim: 100 exec/s: 42 rss: 72Mb L: 70/74 MS: 1 CopyPart- 00:07:13.945 [2024-05-15 05:32:03.741146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.945 [2024-05-15 05:32:03.741173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.945 [2024-05-15 05:32:03.741217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.945 [2024-05-15 05:32:03.741230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.945 [2024-05-15 05:32:03.741284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.945 [2024-05-15 05:32:03.741302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.945 #43 NEW cov: 12101 ft: 14388 corp: 23/1280b lim: 100 exec/s: 43 rss: 72Mb L: 62/74 MS: 1 ChangeByte- 00:07:13.945 [2024-05-15 05:32:03.791154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.945 [2024-05-15 05:32:03.791182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.945 [2024-05-15 05:32:03.791212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.945 [2024-05-15 05:32:03.791226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.945 #44 NEW cov: 12101 ft: 14410 corp: 24/1320b lim: 100 exec/s: 44 rss: 72Mb L: 40/74 MS: 1 InsertByte- 00:07:13.945 [2024-05-15 05:32:03.831171] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.945 [2024-05-15 05:32:03.831198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.945 #45 NEW cov: 12101 ft: 14416 corp: 25/1354b lim: 100 exec/s: 45 rss: 72Mb L: 34/74 MS: 1 InsertByte- 00:07:13.945 [2024-05-15 05:32:03.881555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.945 [2024-05-15 05:32:03.881582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.945 [2024-05-15 05:32:03.881628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.945 [2024-05-15 05:32:03.881643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.945 [2024-05-15 05:32:03.881696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:13.945 [2024-05-15 05:32:03.881711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.945 #46 NEW cov: 12101 ft: 14453 corp: 26/1414b lim: 100 exec/s: 46 rss: 72Mb L: 60/74 MS: 1 ShuffleBytes- 00:07:13.945 [2024-05-15 05:32:03.921489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:13.945 [2024-05-15 05:32:03.921516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.945 [2024-05-15 05:32:03.921550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:13.945 [2024-05-15 05:32:03.921564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.945 #47 NEW cov: 12101 ft: 14526 corp: 27/1459b lim: 100 exec/s: 47 rss: 72Mb L: 45/74 MS: 1 ChangeByte- 00:07:14.204 [2024-05-15 05:32:03.971727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.204 [2024-05-15 05:32:03.971753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.204 [2024-05-15 05:32:03.971787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.204 [2024-05-15 05:32:03.971801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.204 [2024-05-15 05:32:03.971854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.204 [2024-05-15 05:32:03.971869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.204 #48 NEW cov: 12101 ft: 14590 corp: 28/1522b lim: 100 exec/s: 48 rss: 72Mb L: 63/74 MS: 1 EraseBytes- 00:07:14.204 [2024-05-15 05:32:04.021812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.204 [2024-05-15 05:32:04.021842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:14.204 [2024-05-15 05:32:04.021893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.204 [2024-05-15 05:32:04.021909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.204 #49 NEW cov: 12101 ft: 14612 corp: 29/1567b lim: 100 exec/s: 49 rss: 72Mb L: 45/74 MS: 1 CrossOver- 00:07:14.204 [2024-05-15 05:32:04.062028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.204 [2024-05-15 05:32:04.062055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.204 [2024-05-15 05:32:04.062102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.204 [2024-05-15 05:32:04.062114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.062170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.205 [2024-05-15 05:32:04.062183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.205 #50 NEW cov: 12101 ft: 14651 corp: 30/1627b lim: 100 exec/s: 50 rss: 73Mb L: 60/74 MS: 1 ChangeBinInt- 00:07:14.205 [2024-05-15 05:32:04.112202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.205 [2024-05-15 05:32:04.112229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.112266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.205 [2024-05-15 05:32:04.112280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.112332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.205 [2024-05-15 05:32:04.112347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.205 #51 NEW cov: 12101 ft: 14659 corp: 31/1696b lim: 100 exec/s: 51 rss: 73Mb L: 69/74 MS: 1 EraseBytes- 00:07:14.205 [2024-05-15 05:32:04.152169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.205 [2024-05-15 05:32:04.152197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.152244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.205 [2024-05-15 05:32:04.152257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.205 #52 NEW cov: 12101 ft: 14689 corp: 32/1741b lim: 100 exec/s: 52 rss: 73Mb L: 45/74 MS: 1 CopyPart- 00:07:14.205 [2024-05-15 05:32:04.192527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.205 [2024-05-15 05:32:04.192554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.192602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.205 [2024-05-15 05:32:04.192616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.192668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.205 [2024-05-15 05:32:04.192683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.205 [2024-05-15 05:32:04.192734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:14.205 [2024-05-15 05:32:04.192752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.205 #53 NEW cov: 12101 ft: 15007 corp: 33/1840b lim: 100 exec/s: 53 rss: 73Mb L: 99/99 MS: 1 CrossOver- 00:07:14.464 [2024-05-15 05:32:04.242308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.464 [2024-05-15 05:32:04.242336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.464 #54 NEW cov: 12101 ft: 15056 corp: 34/1879b lim: 100 exec/s: 54 rss: 73Mb L: 39/99 MS: 1 ShuffleBytes- 00:07:14.464 [2024-05-15 05:32:04.282674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.464 [2024-05-15 05:32:04.282701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.282744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.464 [2024-05-15 05:32:04.282758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.282811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.464 [2024-05-15 05:32:04.282826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.464 #55 NEW cov: 12101 ft: 15058 corp: 35/1946b lim: 100 exec/s: 55 rss: 73Mb L: 67/99 MS: 1 ChangeBinInt- 00:07:14.464 [2024-05-15 05:32:04.332785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.464 [2024-05-15 05:32:04.332812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.332850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.464 [2024-05-15 05:32:04.332865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.332919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.464 [2024-05-15 05:32:04.332934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.464 #56 
NEW cov: 12101 ft: 15063 corp: 36/2013b lim: 100 exec/s: 56 rss: 73Mb L: 67/99 MS: 1 ChangeBit- 00:07:14.464 [2024-05-15 05:32:04.382947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.464 [2024-05-15 05:32:04.382974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.383012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.464 [2024-05-15 05:32:04.383027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.383078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.464 [2024-05-15 05:32:04.383093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.464 #57 NEW cov: 12101 ft: 15071 corp: 37/2085b lim: 100 exec/s: 57 rss: 73Mb L: 72/99 MS: 1 PersAutoDict- DE: "\377\377\377\005"- 00:07:14.464 [2024-05-15 05:32:04.423070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.464 [2024-05-15 05:32:04.423097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.464 [2024-05-15 05:32:04.423145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.464 [2024-05-15 05:32:04.423162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.465 [2024-05-15 05:32:04.423216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.465 [2024-05-15 05:32:04.423231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.465 #58 NEW cov: 12101 ft: 15077 corp: 38/2145b lim: 100 exec/s: 58 rss: 73Mb L: 60/99 MS: 1 ChangeASCIIInt- 00:07:14.465 [2024-05-15 05:32:04.473174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.465 [2024-05-15 05:32:04.473199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.465 [2024-05-15 05:32:04.473247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.465 [2024-05-15 05:32:04.473261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.465 [2024-05-15 05:32:04.473315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:14.465 [2024-05-15 05:32:04.473330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.724 #59 NEW cov: 12101 ft: 15095 corp: 39/2219b lim: 100 exec/s: 59 rss: 73Mb L: 74/99 MS: 1 CrossOver- 00:07:14.724 [2024-05-15 05:32:04.523238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:14.724 [2024-05-15 05:32:04.523264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.724 [2024-05-15 05:32:04.523300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:14.724 [2024-05-15 05:32:04.523315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.724 #60 NEW cov: 12101 ft: 15104 corp: 40/2266b lim: 100 exec/s: 30 rss: 73Mb L: 47/99 MS: 1 CMP- DE: "\001\205\316\317A\353\013\204"- 00:07:14.724 #60 DONE cov: 12101 ft: 15104 corp: 40/2266b lim: 100 exec/s: 30 rss: 73Mb 00:07:14.724 ###### Recommended dictionary. ###### 00:07:14.724 "\377\377\377\005" # Uses: 1 00:07:14.724 "\001\205\316\317A\353\013\204" # Uses: 0 00:07:14.724 ###### End of recommended dictionary. ###### 00:07:14.724 Done 60 runs in 2 second(s) 00:07:14.724 [2024-05-15 05:32:04.545972] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:14.725 05:32:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 
00:07:14.725 [2024-05-15 05:32:04.712728] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:14.725 [2024-05-15 05:32:04.712803] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274644 ] 00:07:14.984 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.984 [2024-05-15 05:32:04.892454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.984 [2024-05-15 05:32:04.958408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.243 [2024-05-15 05:32:05.017370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.243 [2024-05-15 05:32:05.033328] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:15.243 [2024-05-15 05:32:05.033762] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:15.243 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.243 INFO: Seed: 1783623296 00:07:15.243 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:15.243 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:15.243 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:15.243 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.243 #2 INITED exec/s: 0 rss: 64Mb 00:07:15.243 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:15.243 This may also happen if the target rejected all inputs we tried so far 00:07:15.243 [2024-05-15 05:32:05.088777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779705906328178 len:29299 00:07:15.243 [2024-05-15 05:32:05.088810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.503 NEW_FUNC[1/685]: 0x4a2410 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:15.503 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.503 #11 NEW cov: 11835 ft: 11836 corp: 2/14b lim: 50 exec/s: 0 rss: 70Mb L: 13/13 MS: 4 InsertByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:15.503 [2024-05-15 05:32:05.399621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779379488813682 len:29299 00:07:15.503 [2024-05-15 05:32:05.399654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.503 #12 NEW cov: 11965 ft: 12408 corp: 3/27b lim: 50 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:07:15.503 [2024-05-15 05:32:05.449913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:15.503 [2024-05-15 05:32:05.449945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.503 [2024-05-15 05:32:05.449975] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:15.503 [2024-05-15 05:32:05.449994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.503 [2024-05-15 05:32:05.450048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:07:15.503 [2024-05-15 05:32:05.450064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.503 #16 NEW cov: 11971 ft: 13067 corp: 4/61b lim: 50 exec/s: 0 rss: 70Mb L: 34/34 MS: 4 ShuffleBytes-ChangeByte-CrossOver-InsertRepeatedBytes- 00:07:15.503 [2024-05-15 05:32:05.489746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779379472036466 len:29299 00:07:15.503 [2024-05-15 05:32:05.489775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.503 #22 NEW cov: 12056 ft: 13440 corp: 5/74b lim: 50 exec/s: 0 rss: 70Mb L: 13/34 MS: 1 ChangeBit- 00:07:15.763 [2024-05-15 05:32:05.539964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779705906328178 len:29299 00:07:15.763 [2024-05-15 05:32:05.539992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.763 #23 NEW cov: 12056 ft: 13565 corp: 6/86b lim: 50 exec/s: 0 rss: 71Mb L: 12/34 MS: 1 EraseBytes- 00:07:15.763 [2024-05-15 05:32:05.580299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601735306958 len:52943 00:07:15.763 [2024-05-15 05:32:05.580327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.580361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.580377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.580439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.580454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.763 #24 NEW cov: 12056 ft: 13750 corp: 7/120b lim: 50 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 ChangeByte- 00:07:15.763 [2024-05-15 05:32:05.630562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1446803456761533460 len:5141 00:07:15.763 [2024-05-15 05:32:05.630591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.630634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2410216135092081684 len:52943 00:07:15.763 [2024-05-15 05:32:05.630650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.630704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.630720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.630773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.630789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.763 #25 NEW cov: 12056 ft: 14060 corp: 8/168b lim: 50 exec/s: 0 rss: 71Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:07:15.763 [2024-05-15 05:32:05.680325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779705906328077 len:29299 00:07:15.763 [2024-05-15 05:32:05.680355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.763 #26 NEW cov: 12056 ft: 14090 corp: 9/181b lim: 50 exec/s: 0 rss: 71Mb L: 13/48 MS: 1 ChangeBinInt- 00:07:15.763 [2024-05-15 05:32:05.720889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:15.763 [2024-05-15 05:32:05.720917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.720964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.720980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.721032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.721049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.721101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.721117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.763 #32 NEW cov: 12056 ft: 14104 corp: 10/225b lim: 50 exec/s: 0 rss: 71Mb L: 44/48 MS: 1 CopyPart- 00:07:15.763 [2024-05-15 05:32:05.760775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601735306958 len:52943 00:07:15.763 [2024-05-15 05:32:05.760803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 05:32:05.760845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:15.763 [2024-05-15 05:32:05.760860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.763 [2024-05-15 
05:32:05.760916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643744462 len:52943 00:07:15.763 [2024-05-15 05:32:05.760931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.763 #33 NEW cov: 12056 ft: 14172 corp: 11/260b lim: 50 exec/s: 0 rss: 71Mb L: 35/48 MS: 1 CrossOver- 00:07:16.023 [2024-05-15 05:32:05.800876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.023 [2024-05-15 05:32:05.800906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.800940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:05.800957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.801012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902074724175498958 len:52943 00:07:16.023 [2024-05-15 05:32:05.801028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.023 #34 NEW cov: 12056 ft: 14212 corp: 12/294b lim: 50 exec/s: 0 rss: 71Mb L: 34/48 MS: 1 ChangeByte- 00:07:16.023 [2024-05-15 05:32:05.840994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601735306958 len:52943 00:07:16.023 [2024-05-15 05:32:05.841028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.841075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:05.841091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.841146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643744462 len:52943 00:07:16.023 [2024-05-15 05:32:05.841163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.023 #35 NEW cov: 12056 ft: 14261 corp: 13/329b lim: 50 exec/s: 0 rss: 71Mb L: 35/48 MS: 1 ChangeBit- 00:07:16.023 [2024-05-15 05:32:05.891134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.023 [2024-05-15 05:32:05.891162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.891196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3544841769345494065 len:52943 00:07:16.023 [2024-05-15 05:32:05.891211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.891266] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:05.891282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.023 #36 NEW cov: 12056 ft: 14310 corp: 14/363b lim: 50 exec/s: 0 rss: 71Mb L: 34/48 MS: 1 ChangeBinInt- 00:07:16.023 [2024-05-15 05:32:05.931261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601735306958 len:52431 00:07:16.023 [2024-05-15 05:32:05.931289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.931322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:05.931337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.931397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643744462 len:52943 00:07:16.023 [2024-05-15 05:32:05.931411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.023 #37 NEW cov: 12056 ft: 14359 corp: 15/398b lim: 50 exec/s: 0 rss: 71Mb L: 35/48 MS: 1 ChangeBinInt- 00:07:16.023 [2024-05-15 05:32:05.981239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.023 [2024-05-15 05:32:05.981267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:05.981301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:05.981316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.023 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:16.023 #38 NEW cov: 12079 ft: 14608 corp: 16/419b lim: 50 exec/s: 0 rss: 71Mb L: 21/48 MS: 1 EraseBytes- 00:07:16.023 [2024-05-15 05:32:06.021603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.023 [2024-05-15 05:32:06.021635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:06.021672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:06.021688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.023 [2024-05-15 05:32:06.021738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643790030 len:52943 00:07:16.023 [2024-05-15 05:32:06.021755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.023 
[2024-05-15 05:32:06.021810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14902075604643794638 len:52943 00:07:16.023 [2024-05-15 05:32:06.021825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.283 #39 NEW cov: 12079 ft: 14630 corp: 17/464b lim: 50 exec/s: 0 rss: 72Mb L: 45/48 MS: 1 InsertByte- 00:07:16.283 [2024-05-15 05:32:06.071519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14915921962352168654 len:65536 00:07:16.283 [2024-05-15 05:32:06.071548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.283 [2024-05-15 05:32:06.071580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:16.283 [2024-05-15 05:32:06.071594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.283 #42 NEW cov: 12079 ft: 14708 corp: 18/493b lim: 50 exec/s: 42 rss: 72Mb L: 29/48 MS: 3 CrossOver-ChangeBit-InsertRepeatedBytes- 00:07:16.283 [2024-05-15 05:32:06.121648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18377229688891965439 len:2314 00:07:16.283 [2024-05-15 05:32:06.121676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.283 [2024-05-15 05:32:06.121719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:651061555542690057 len:2314 00:07:16.283 [2024-05-15 05:32:06.121734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.283 #50 NEW cov: 12079 ft: 14740 corp: 19/518b lim: 50 exec/s: 50 rss: 72Mb L: 25/48 MS: 3 InsertByte-CMP-InsertRepeatedBytes- DE: "\377\377\377\377"- 00:07:16.283 [2024-05-15 05:32:06.161773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069431361552 len:2816 00:07:16.283 [2024-05-15 05:32:06.161801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.283 [2024-05-15 05:32:06.161831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:16.283 [2024-05-15 05:32:06.161847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.283 #53 NEW cov: 12079 ft: 14750 corp: 20/539b lim: 50 exec/s: 53 rss: 72Mb L: 21/48 MS: 3 CMP-PersAutoDict-InsertRepeatedBytes- DE: "\001\000\000\020"-"\377\377\377\377"- 00:07:16.283 [2024-05-15 05:32:06.201800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:5118941457080076103 len:8739 00:07:16.283 [2024-05-15 05:32:06.201828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.283 #56 NEW cov: 12079 ft: 14760 corp: 21/554b lim: 50 exec/s: 56 rss: 72Mb L: 15/48 MS: 3 InsertRepeatedBytes-CopyPart-InsertRepeatedBytes- 00:07:16.283 
[2024-05-15 05:32:06.241905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446588436970405887 len:29299 00:07:16.283 [2024-05-15 05:32:06.241933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.283 #58 NEW cov: 12079 ft: 14792 corp: 22/567b lim: 50 exec/s: 58 rss: 72Mb L: 13/48 MS: 2 EraseBytes-PersAutoDict- DE: "\377\377\377\377"- 00:07:16.283 [2024-05-15 05:32:06.292028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10199934569301578482 len:29299 00:07:16.283 [2024-05-15 05:32:06.292057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.542 #64 NEW cov: 12079 ft: 14806 corp: 23/580b lim: 50 exec/s: 64 rss: 72Mb L: 13/48 MS: 1 ChangeBinInt- 00:07:16.542 [2024-05-15 05:32:06.342204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779379488879218 len:29299 00:07:16.542 [2024-05-15 05:32:06.342232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.542 #65 NEW cov: 12079 ft: 14809 corp: 24/593b lim: 50 exec/s: 65 rss: 72Mb L: 13/48 MS: 1 ChangeBit- 00:07:16.542 [2024-05-15 05:32:06.382282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779705902985842 len:9843 00:07:16.542 [2024-05-15 05:32:06.382311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.542 #66 NEW cov: 12079 ft: 14901 corp: 25/607b lim: 50 exec/s: 66 rss: 72Mb L: 14/48 MS: 1 InsertByte- 00:07:16.542 [2024-05-15 05:32:06.422653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779379488813682 len:29299 00:07:16.542 [2024-05-15 05:32:06.422681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.542 [2024-05-15 05:32:06.422722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071343964159 len:65536 00:07:16.542 [2024-05-15 05:32:06.422738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.542 [2024-05-15 05:32:06.422793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:16.542 [2024-05-15 05:32:06.422808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.542 #67 NEW cov: 12079 ft: 14934 corp: 26/642b lim: 50 exec/s: 67 rss: 72Mb L: 35/48 MS: 1 InsertRepeatedBytes- 00:07:16.542 [2024-05-15 05:32:06.462723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.542 [2024-05-15 05:32:06.462751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.542 [2024-05-15 05:32:06.462785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:14902075604643794638 len:52943 00:07:16.542 [2024-05-15 05:32:06.462800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.542 [2024-05-15 05:32:06.462854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794638 len:52943 00:07:16.542 [2024-05-15 05:32:06.462870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.542 #68 NEW cov: 12079 ft: 14938 corp: 27/677b lim: 50 exec/s: 68 rss: 72Mb L: 35/48 MS: 1 InsertByte- 00:07:16.542 [2024-05-15 05:32:06.502736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073700274802 len:65536 00:07:16.542 [2024-05-15 05:32:06.502768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.543 [2024-05-15 05:32:06.502809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65395 00:07:16.543 [2024-05-15 05:32:06.502824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.543 #69 NEW cov: 12079 ft: 14979 corp: 28/705b lim: 50 exec/s: 69 rss: 72Mb L: 28/48 MS: 1 InsertRepeatedBytes- 00:07:16.543 [2024-05-15 05:32:06.543116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446540973286817791 len:18248 00:07:16.543 [2024-05-15 05:32:06.543147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.543 [2024-05-15 05:32:06.543186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:5136152271503443783 len:18248 00:07:16.543 [2024-05-15 05:32:06.543203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.543 [2024-05-15 05:32:06.543255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:5136152271503443783 len:18248 00:07:16.543 [2024-05-15 05:32:06.543271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.543 [2024-05-15 05:32:06.543324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5136152271503443783 len:29299 00:07:16.543 [2024-05-15 05:32:06.543340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.802 #70 NEW cov: 12079 ft: 15017 corp: 29/750b lim: 50 exec/s: 70 rss: 72Mb L: 45/48 MS: 1 InsertRepeatedBytes- 00:07:16.802 [2024-05-15 05:32:06.593042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073700274802 len:65536 00:07:16.802 [2024-05-15 05:32:06.593073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.593104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65395 
00:07:16.802 [2024-05-15 05:32:06.593120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.802 #71 NEW cov: 12079 ft: 15048 corp: 30/778b lim: 50 exec/s: 71 rss: 72Mb L: 28/48 MS: 1 CopyPart- 00:07:16.802 [2024-05-15 05:32:06.643324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.802 [2024-05-15 05:32:06.643353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.643390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.802 [2024-05-15 05:32:06.643406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.643463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:8272776953154424526 len:52943 00:07:16.802 [2024-05-15 05:32:06.643479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.802 #72 NEW cov: 12079 ft: 15058 corp: 31/813b lim: 50 exec/s: 72 rss: 72Mb L: 35/48 MS: 1 CopyPart- 00:07:16.802 [2024-05-15 05:32:06.693592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.802 [2024-05-15 05:32:06.693621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.693657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52943 00:07:16.802 [2024-05-15 05:32:06.693673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.693730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643790030 len:52943 00:07:16.802 [2024-05-15 05:32:06.693746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.693801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2479187441876992 len:52943 00:07:16.802 [2024-05-15 05:32:06.693817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:16.802 #73 NEW cov: 12079 ft: 15063 corp: 32/858b lim: 50 exec/s: 73 rss: 72Mb L: 45/48 MS: 1 CMP- DE: "\000\000\000\010"- 00:07:16.802 [2024-05-15 05:32:06.743761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.802 [2024-05-15 05:32:06.743790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.743832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:53199 00:07:16.802 [2024-05-15 05:32:06.743849] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.743902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902074724175498958 len:52943 00:07:16.802 [2024-05-15 05:32:06.743918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:16.802 #74 NEW cov: 12079 ft: 15099 corp: 33/892b lim: 50 exec/s: 74 rss: 72Mb L: 34/48 MS: 1 ChangeBit- 00:07:16.802 [2024-05-15 05:32:06.793750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:16.802 [2024-05-15 05:32:06.793778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.793814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794620 len:52943 00:07:16.802 [2024-05-15 05:32:06.793830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:16.802 [2024-05-15 05:32:06.793888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:9684325944832 len:52943 00:07:16.802 [2024-05-15 05:32:06.793904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.061 #75 NEW cov: 12079 ft: 15102 corp: 34/928b lim: 50 exec/s: 75 rss: 73Mb L: 36/48 MS: 1 EraseBytes- 00:07:17.061 [2024-05-15 05:32:06.843918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601735306958 len:52431 00:07:17.061 [2024-05-15 05:32:06.843947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.061 [2024-05-15 05:32:06.843980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:6095 00:07:17.061 [2024-05-15 05:32:06.843996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.061 [2024-05-15 05:32:06.844050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643744462 len:52943 00:07:17.061 [2024-05-15 05:32:06.844071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.062 #76 NEW cov: 12079 ft: 15121 corp: 35/963b lim: 50 exec/s: 76 rss: 73Mb L: 35/48 MS: 1 ChangeByte- 00:07:17.062 [2024-05-15 05:32:06.893931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:17.062 [2024-05-15 05:32:06.893960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:06.893991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604660571854 len:52943 00:07:17.062 [2024-05-15 05:32:06.894008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.062 #77 NEW cov: 12079 ft: 15135 corp: 36/984b lim: 50 exec/s: 77 rss: 73Mb L: 21/48 MS: 1 CrossOver- 00:07:17.062 [2024-05-15 05:32:06.933935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8246779705906328182 len:29299 00:07:17.062 [2024-05-15 05:32:06.933965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.062 #78 NEW cov: 12079 ft: 15149 corp: 37/996b lim: 50 exec/s: 78 rss: 73Mb L: 12/48 MS: 1 ChangeBit- 00:07:17.062 [2024-05-15 05:32:06.974263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601835970254 len:52943 00:07:17.062 [2024-05-15 05:32:06.974293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:06.974320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794620 len:52943 00:07:17.062 [2024-05-15 05:32:06.974336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:06.974395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075601174137038 len:52943 00:07:17.062 [2024-05-15 05:32:06.974412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.062 #79 NEW cov: 12079 ft: 15176 corp: 38/1028b lim: 50 exec/s: 79 rss: 73Mb L: 32/48 MS: 1 EraseBytes- 00:07:17.062 [2024-05-15 05:32:07.024391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2842561703268515839 len:52943 00:07:17.062 [2024-05-15 05:32:07.024419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:07.024455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:65536 00:07:17.062 [2024-05-15 05:32:07.024470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:07.024526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14863927466623749838 len:18383 00:07:17.062 [2024-05-15 05:32:07.024541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.062 #80 NEW cov: 12079 ft: 15236 corp: 39/1058b lim: 50 exec/s: 80 rss: 73Mb L: 30/48 MS: 1 CrossOver- 00:07:17.062 [2024-05-15 05:32:07.074568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14902075601735306958 len:52431 00:07:17.062 [2024-05-15 05:32:07.074597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:07.074634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14902075604643794638 len:52760 00:07:17.062 [2024-05-15 05:32:07.074653] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:17.062 [2024-05-15 05:32:07.074706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14902075604643794442 len:52943 00:07:17.062 [2024-05-15 05:32:07.074724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:17.321 #81 NEW cov: 12079 ft: 15249 corp: 40/1094b lim: 50 exec/s: 40 rss: 73Mb L: 36/48 MS: 1 CopyPart- 00:07:17.321 #81 DONE cov: 12079 ft: 15249 corp: 40/1094b lim: 50 exec/s: 40 rss: 73Mb 00:07:17.321 ###### Recommended dictionary. ###### 00:07:17.321 "\377\377\377\377" # Uses: 2 00:07:17.321 "\001\000\000\020" # Uses: 0 00:07:17.321 "\000\000\000\010" # Uses: 0 00:07:17.321 ###### End of recommended dictionary. ###### 00:07:17.321 Done 81 runs in 2 second(s) 00:07:17.321 [2024-05-15 05:32:07.104963] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:17.321 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:17.322 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:17.322 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:17.322 05:32:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf 
-t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:17.322 [2024-05-15 05:32:07.272399] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:17.322 [2024-05-15 05:32:07.272482] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275179 ] 00:07:17.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.581 [2024-05-15 05:32:07.448935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.581 [2024-05-15 05:32:07.515291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.581 [2024-05-15 05:32:07.574501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.581 [2024-05-15 05:32:07.590453] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:17.581 [2024-05-15 05:32:07.590871] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:17.840 INFO: Running with entropic power schedule (0xFF, 100). 00:07:17.840 INFO: Seed: 43663402 00:07:17.840 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:17.840 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:17.840 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:17.840 INFO: A corpus is not provided, starting from an empty corpus 00:07:17.840 #2 INITED exec/s: 0 rss: 63Mb 00:07:17.840 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:17.840 This may also happen if the target rejected all inputs we tried so far 00:07:17.840 [2024-05-15 05:32:07.640019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:17.840 [2024-05-15 05:32:07.640050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:17.840 [2024-05-15 05:32:07.640108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:17.840 [2024-05-15 05:32:07.640123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.100 NEW_FUNC[1/687]: 0x4a3fd0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:18.101 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:18.101 #5 NEW cov: 11893 ft: 11890 corp: 2/38b lim: 90 exec/s: 0 rss: 70Mb L: 37/37 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:07:18.101 [2024-05-15 05:32:07.951004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.101 [2024-05-15 05:32:07.951044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:07.951109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.101 [2024-05-15 05:32:07.951129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:07.951192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.101 [2024-05-15 05:32:07.951213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.101 #7 NEW cov: 12023 ft: 12800 corp: 3/107b lim: 90 exec/s: 0 rss: 70Mb L: 69/69 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:18.101 [2024-05-15 05:32:07.991105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.101 [2024-05-15 05:32:07.991134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:07.991171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.101 [2024-05-15 05:32:07.991186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:07.991239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.101 [2024-05-15 05:32:07.991256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:07.991311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.101 [2024-05-15 05:32:07.991326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:07:18.101 #8 NEW cov: 12029 ft: 13377 corp: 4/183b lim: 90 exec/s: 0 rss: 70Mb L: 76/76 MS: 1 CopyPart- 00:07:18.101 [2024-05-15 05:32:08.041003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.101 [2024-05-15 05:32:08.041031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:08.041069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.101 [2024-05-15 05:32:08.041084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.101 #9 NEW cov: 12114 ft: 13594 corp: 5/221b lim: 90 exec/s: 0 rss: 70Mb L: 38/76 MS: 1 InsertByte- 00:07:18.101 [2024-05-15 05:32:08.091456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.101 [2024-05-15 05:32:08.091484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:08.091521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.101 [2024-05-15 05:32:08.091536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:08.091591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.101 [2024-05-15 05:32:08.091607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.101 [2024-05-15 05:32:08.091660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.101 [2024-05-15 05:32:08.091675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.361 #10 NEW cov: 12114 ft: 13683 corp: 6/297b lim: 90 exec/s: 0 rss: 70Mb L: 76/76 MS: 1 ChangeByte- 00:07:18.361 [2024-05-15 05:32:08.141618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.361 [2024-05-15 05:32:08.141648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.141689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.361 [2024-05-15 05:32:08.141705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.141762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.361 [2024-05-15 05:32:08.141778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.141832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.361 [2024-05-15 05:32:08.141847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:07:18.361 #16 NEW cov: 12114 ft: 13833 corp: 7/381b lim: 90 exec/s: 0 rss: 70Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:18.361 [2024-05-15 05:32:08.181705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.361 [2024-05-15 05:32:08.181733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.181780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.361 [2024-05-15 05:32:08.181797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.181852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.361 [2024-05-15 05:32:08.181872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.181927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.361 [2024-05-15 05:32:08.181943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.361 #17 NEW cov: 12114 ft: 13919 corp: 8/457b lim: 90 exec/s: 0 rss: 71Mb L: 76/84 MS: 1 CrossOver- 00:07:18.361 [2024-05-15 05:32:08.221808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.361 [2024-05-15 05:32:08.221836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.221883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.361 [2024-05-15 05:32:08.221898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.221951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.361 [2024-05-15 05:32:08.221966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.222021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.361 [2024-05-15 05:32:08.222035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.361 #23 NEW cov: 12114 ft: 13952 corp: 9/541b lim: 90 exec/s: 0 rss: 71Mb L: 84/84 MS: 1 ChangeBinInt- 00:07:18.361 [2024-05-15 05:32:08.271819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.361 [2024-05-15 05:32:08.271847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.271889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.361 [2024-05-15 05:32:08.271905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:07:18.361 [2024-05-15 05:32:08.271960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.361 [2024-05-15 05:32:08.271977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.361 #24 NEW cov: 12114 ft: 13990 corp: 10/610b lim: 90 exec/s: 0 rss: 71Mb L: 69/84 MS: 1 ChangeBit- 00:07:18.361 [2024-05-15 05:32:08.312069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.361 [2024-05-15 05:32:08.312098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.312144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.361 [2024-05-15 05:32:08.312160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.312215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.361 [2024-05-15 05:32:08.312232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.361 [2024-05-15 05:32:08.312288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.361 [2024-05-15 05:32:08.312303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.361 #25 NEW cov: 12114 ft: 14042 corp: 11/694b lim: 90 exec/s: 0 rss: 71Mb L: 84/84 MS: 1 ChangeBinInt- 00:07:18.361 [2024-05-15 05:32:08.362203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.361 [2024-05-15 05:32:08.362233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.362 [2024-05-15 05:32:08.362275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.362 [2024-05-15 05:32:08.362291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.362 [2024-05-15 05:32:08.362341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.362 [2024-05-15 05:32:08.362357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.362 [2024-05-15 05:32:08.362409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.362 [2024-05-15 05:32:08.362425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.628 #26 NEW cov: 12114 ft: 14062 corp: 12/770b lim: 90 exec/s: 0 rss: 71Mb L: 76/84 MS: 1 ChangeByte- 00:07:18.628 [2024-05-15 05:32:08.402486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.628 [2024-05-15 05:32:08.402516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:18.628 [2024-05-15 05:32:08.402568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.628 [2024-05-15 05:32:08.402583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.628 [2024-05-15 05:32:08.402638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.628 [2024-05-15 05:32:08.402654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.628 [2024-05-15 05:32:08.402706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.628 [2024-05-15 05:32:08.402721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.628 [2024-05-15 05:32:08.402774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:18.628 [2024-05-15 05:32:08.402790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:18.629 #27 NEW cov: 12114 ft: 14122 corp: 13/860b lim: 90 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:18.629 [2024-05-15 05:32:08.452188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.629 [2024-05-15 05:32:08.452216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.452246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.629 [2024-05-15 05:32:08.452262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.629 #28 NEW cov: 12114 ft: 14175 corp: 14/897b lim: 90 exec/s: 0 rss: 71Mb L: 37/90 MS: 1 ChangeBinInt- 00:07:18.629 [2024-05-15 05:32:08.492260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.629 [2024-05-15 05:32:08.492288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.492329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.629 [2024-05-15 05:32:08.492345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.629 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:18.629 #29 NEW cov: 12137 ft: 14247 corp: 15/935b lim: 90 exec/s: 0 rss: 71Mb L: 38/90 MS: 1 ShuffleBytes- 00:07:18.629 [2024-05-15 05:32:08.542428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.629 [2024-05-15 05:32:08.542456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.542491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.629 [2024-05-15 05:32:08.542507] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.629 #30 NEW cov: 12137 ft: 14268 corp: 16/972b lim: 90 exec/s: 0 rss: 71Mb L: 37/90 MS: 1 ChangeBinInt- 00:07:18.629 [2024-05-15 05:32:08.592861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.629 [2024-05-15 05:32:08.592890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.592937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.629 [2024-05-15 05:32:08.592954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.593010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.629 [2024-05-15 05:32:08.593026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.593080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.629 [2024-05-15 05:32:08.593096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.629 #40 NEW cov: 12137 ft: 14296 corp: 17/1045b lim: 90 exec/s: 0 rss: 71Mb L: 73/90 MS: 5 CopyPart-ChangeByte-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:18.629 [2024-05-15 05:32:08.632667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.629 [2024-05-15 05:32:08.632695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.629 [2024-05-15 05:32:08.632728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.629 [2024-05-15 05:32:08.632745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.889 #41 NEW cov: 12137 ft: 14324 corp: 18/1088b lim: 90 exec/s: 41 rss: 71Mb L: 43/90 MS: 1 CrossOver- 00:07:18.889 [2024-05-15 05:32:08.683108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.889 [2024-05-15 05:32:08.683137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.683179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.889 [2024-05-15 05:32:08.683195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.683247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.889 [2024-05-15 05:32:08.683262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.683315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 
00:07:18.889 [2024-05-15 05:32:08.683330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.889 #42 NEW cov: 12137 ft: 14336 corp: 19/1172b lim: 90 exec/s: 42 rss: 72Mb L: 84/90 MS: 1 ChangeBit- 00:07:18.889 [2024-05-15 05:32:08.733225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.889 [2024-05-15 05:32:08.733252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.733301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.889 [2024-05-15 05:32:08.733316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.733372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.889 [2024-05-15 05:32:08.733393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.733451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.889 [2024-05-15 05:32:08.733467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.889 #43 NEW cov: 12137 ft: 14356 corp: 20/1248b lim: 90 exec/s: 43 rss: 72Mb L: 76/90 MS: 1 CrossOver- 00:07:18.889 [2024-05-15 05:32:08.773317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.889 [2024-05-15 05:32:08.773345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.773401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.889 [2024-05-15 05:32:08.773417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.773470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.889 [2024-05-15 05:32:08.773486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.773539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.889 [2024-05-15 05:32:08.773553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.889 #44 NEW cov: 12137 ft: 14370 corp: 21/1323b lim: 90 exec/s: 44 rss: 72Mb L: 75/90 MS: 1 CrossOver- 00:07:18.889 [2024-05-15 05:32:08.823525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.889 [2024-05-15 05:32:08.823553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.823600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 
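For reference when reading the status lines above: each "#N NEW cov: ..." entry is a libFuzzer progress report — cov is the number of covered coverage points, ft the number of features, corp the corpus size as entries/total bytes, lim the current input length limit, exec/s the execution rate, rss resident memory, L the size of the newly added input against the largest one seen, and MS the mutation sequence that produced it. A minimal offline post-processing sketch, assuming the console output was saved to console.log (a hypothetical path, not part of the CI job), to print how coverage and corpus size evolved during a run:
# Extract the libFuzzer status lines and show the coverage/corpus progression.
grep -o '#[0-9]\+ NEW cov: [0-9]\+ ft: [0-9]\+ corp: [0-9]\+/[0-9]\+b' console.log |
  awk '{print $1, "cov=" $4, "ft=" $6, "corp=" $8}'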
00:07:18.889 [2024-05-15 05:32:08.823616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.823669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.889 [2024-05-15 05:32:08.823685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.889 [2024-05-15 05:32:08.823737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.889 [2024-05-15 05:32:08.823751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.890 #50 NEW cov: 12137 ft: 14377 corp: 22/1407b lim: 90 exec/s: 50 rss: 72Mb L: 84/90 MS: 1 ShuffleBytes- 00:07:18.890 [2024-05-15 05:32:08.863587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.890 [2024-05-15 05:32:08.863619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.890 [2024-05-15 05:32:08.863655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.890 [2024-05-15 05:32:08.863671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:18.890 [2024-05-15 05:32:08.863726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:18.890 [2024-05-15 05:32:08.863742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:18.890 [2024-05-15 05:32:08.863797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:18.890 [2024-05-15 05:32:08.863814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:18.890 #51 NEW cov: 12137 ft: 14390 corp: 23/1489b lim: 90 exec/s: 51 rss: 72Mb L: 82/90 MS: 1 InsertRepeatedBytes- 00:07:18.890 [2024-05-15 05:32:08.903413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:18.890 [2024-05-15 05:32:08.903440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:18.890 [2024-05-15 05:32:08.903495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:18.890 [2024-05-15 05:32:08.903512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.149 #52 NEW cov: 12137 ft: 14414 corp: 24/1527b lim: 90 exec/s: 52 rss: 72Mb L: 38/90 MS: 1 ShuffleBytes- 00:07:19.149 [2024-05-15 05:32:08.943761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.149 [2024-05-15 05:32:08.943788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.943834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 
nsid:0 00:07:19.149 [2024-05-15 05:32:08.943850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.943903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.149 [2024-05-15 05:32:08.943918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.943974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.149 [2024-05-15 05:32:08.943990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.149 #53 NEW cov: 12137 ft: 14434 corp: 25/1616b lim: 90 exec/s: 53 rss: 72Mb L: 89/90 MS: 1 InsertRepeatedBytes- 00:07:19.149 [2024-05-15 05:32:08.984020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.149 [2024-05-15 05:32:08.984048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.984098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.149 [2024-05-15 05:32:08.984114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.984166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.149 [2024-05-15 05:32:08.984180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.984239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.149 [2024-05-15 05:32:08.984254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:08.984310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:19.149 [2024-05-15 05:32:08.984325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:19.149 #54 NEW cov: 12137 ft: 14507 corp: 26/1706b lim: 90 exec/s: 54 rss: 72Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:19.149 [2024-05-15 05:32:09.023689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.149 [2024-05-15 05:32:09.023718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:09.023768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.149 [2024-05-15 05:32:09.023784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.149 #55 NEW cov: 12137 ft: 14518 corp: 27/1744b lim: 90 exec/s: 55 rss: 72Mb L: 38/90 MS: 1 CMP- DE: "\377\377"- 00:07:19.149 [2024-05-15 05:32:09.064099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.149 [2024-05-15 05:32:09.064126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:09.064169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.149 [2024-05-15 05:32:09.064185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.149 [2024-05-15 05:32:09.064238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.150 [2024-05-15 05:32:09.064254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.064308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.150 [2024-05-15 05:32:09.064324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.150 #56 NEW cov: 12137 ft: 14545 corp: 28/1821b lim: 90 exec/s: 56 rss: 72Mb L: 77/90 MS: 1 CopyPart- 00:07:19.150 [2024-05-15 05:32:09.114261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.150 [2024-05-15 05:32:09.114288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.114336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.150 [2024-05-15 05:32:09.114352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.114408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.150 [2024-05-15 05:32:09.114423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.114479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.150 [2024-05-15 05:32:09.114494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.150 #57 NEW cov: 12137 ft: 14614 corp: 29/1897b lim: 90 exec/s: 57 rss: 72Mb L: 76/90 MS: 1 CMP- DE: "\001\000\000\000\000\000\000H"- 00:07:19.150 [2024-05-15 05:32:09.164454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.150 [2024-05-15 05:32:09.164483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.164518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.150 [2024-05-15 05:32:09.164533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.164588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.150 [2024-05-15 05:32:09.164605] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.150 [2024-05-15 05:32:09.164659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.150 [2024-05-15 05:32:09.164674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.408 #58 NEW cov: 12137 ft: 14617 corp: 30/1974b lim: 90 exec/s: 58 rss: 72Mb L: 77/90 MS: 1 InsertByte- 00:07:19.408 [2024-05-15 05:32:09.204262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.408 [2024-05-15 05:32:09.204289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.204335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.408 [2024-05-15 05:32:09.204350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.408 #59 NEW cov: 12137 ft: 14632 corp: 31/2011b lim: 90 exec/s: 59 rss: 72Mb L: 37/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000H"- 00:07:19.408 [2024-05-15 05:32:09.244213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.408 [2024-05-15 05:32:09.244241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.408 #60 NEW cov: 12137 ft: 15446 corp: 32/2037b lim: 90 exec/s: 60 rss: 72Mb L: 26/90 MS: 1 EraseBytes- 00:07:19.408 [2024-05-15 05:32:09.294816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.408 [2024-05-15 05:32:09.294842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.294890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.408 [2024-05-15 05:32:09.294906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.294961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.408 [2024-05-15 05:32:09.294977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.295033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.408 [2024-05-15 05:32:09.295049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.408 #61 NEW cov: 12137 ft: 15459 corp: 33/2123b lim: 90 exec/s: 61 rss: 72Mb L: 86/90 MS: 1 InsertRepeatedBytes- 00:07:19.408 [2024-05-15 05:32:09.334887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.408 [2024-05-15 05:32:09.334915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.334962] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.408 [2024-05-15 05:32:09.334978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.335035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.408 [2024-05-15 05:32:09.335052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.335106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.408 [2024-05-15 05:32:09.335122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.408 #62 NEW cov: 12137 ft: 15462 corp: 34/2195b lim: 90 exec/s: 62 rss: 72Mb L: 72/90 MS: 1 CrossOver- 00:07:19.408 [2024-05-15 05:32:09.385057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.408 [2024-05-15 05:32:09.385085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.385135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.408 [2024-05-15 05:32:09.385150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.385204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.408 [2024-05-15 05:32:09.385220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.408 [2024-05-15 05:32:09.385275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.408 [2024-05-15 05:32:09.385291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.408 #63 NEW cov: 12137 ft: 15473 corp: 35/2272b lim: 90 exec/s: 63 rss: 72Mb L: 77/90 MS: 1 ChangeBit- 00:07:19.668 [2024-05-15 05:32:09.434909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.668 [2024-05-15 05:32:09.434936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.668 [2024-05-15 05:32:09.434979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.668 [2024-05-15 05:32:09.434995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.668 #64 NEW cov: 12137 ft: 15501 corp: 36/2312b lim: 90 exec/s: 64 rss: 73Mb L: 40/90 MS: 1 EraseBytes- 00:07:19.668 [2024-05-15 05:32:09.474846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.669 [2024-05-15 05:32:09.474873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.669 #65 NEW cov: 12137 ft: 15504 corp: 
37/2346b lim: 90 exec/s: 65 rss: 73Mb L: 34/90 MS: 1 EraseBytes- 00:07:19.669 [2024-05-15 05:32:09.525217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.669 [2024-05-15 05:32:09.525247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.669 [2024-05-15 05:32:09.525306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.669 [2024-05-15 05:32:09.525323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.669 #66 NEW cov: 12137 ft: 15536 corp: 38/2384b lim: 90 exec/s: 66 rss: 73Mb L: 38/90 MS: 1 ChangeBinInt- 00:07:19.669 [2024-05-15 05:32:09.565278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.669 [2024-05-15 05:32:09.565306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.669 [2024-05-15 05:32:09.565365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.669 [2024-05-15 05:32:09.565385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.669 #67 NEW cov: 12137 ft: 15577 corp: 39/2422b lim: 90 exec/s: 67 rss: 73Mb L: 38/90 MS: 1 CopyPart- 00:07:19.669 [2024-05-15 05:32:09.605725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:19.669 [2024-05-15 05:32:09.605751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:19.669 [2024-05-15 05:32:09.605796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:19.669 [2024-05-15 05:32:09.605812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:19.669 [2024-05-15 05:32:09.605866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:19.669 [2024-05-15 05:32:09.605881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:19.669 [2024-05-15 05:32:09.605938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:19.669 [2024-05-15 05:32:09.605954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:19.669 #68 NEW cov: 12137 ft: 15603 corp: 40/2499b lim: 90 exec/s: 34 rss: 73Mb L: 77/90 MS: 1 ChangeBit- 00:07:19.669 #68 DONE cov: 12137 ft: 15603 corp: 40/2499b lim: 90 exec/s: 34 rss: 73Mb 00:07:19.669 ###### Recommended dictionary. ###### 00:07:19.669 "\377\377" # Uses: 0 00:07:19.669 "\001\000\000\000\000\000\000H" # Uses: 1 00:07:19.669 ###### End of recommended dictionary. 
###### 00:07:19.669 Done 68 runs in 2 second(s) 00:07:19.669 [2024-05-15 05:32:09.635881] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:19.929 05:32:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:19.929 [2024-05-15 05:32:09.803021] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
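The shell trace above shows how nvmf/run.sh prepares each fuzzer instance: the TCP listen port is assembled from the two-digit fuzzer number (printf %02d 21 giving 4421 here), the fuzz_json.conf template has its trsvcid rewritten to that port, LSAN leak suppressions are written for spdk_nvmf_qpair_disconnect and nvmf_ctrlr_create, and llvm_nvme_fuzz is launched against the resulting transport ID for the configured time (-t 1). A minimal standalone sketch of the same sequence, assuming $SPDK_DIR points at a built SPDK tree and using placeholder output/corpus paths in place of the CI workspace paths:
# Standalone sketch under stated assumptions; mirrors the nvmf/run.sh trace above.
fuzzer=21
port=44$(printf %02d "$fuzzer")                      # -> 4421
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
mkdir -p ./corpus_$fuzzer ./out
# Point the JSON config template at the chosen port.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_$fuzzer.conf
# LeakSanitizer suppressions for the two SPDK symbols named in the trace.
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
# Same flags as the CI invocation, with placeholder -P/-D paths.
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 -P ./out/ \
    -F "$trid" -c /tmp/fuzz_json_$fuzzer.conf -t 1 -D ./corpus_$fuzzer -Z "$fuzzer"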
00:07:19.929 [2024-05-15 05:32:09.803106] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275526 ] 00:07:19.929 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.188 [2024-05-15 05:32:09.989625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.188 [2024-05-15 05:32:10.064033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.188 [2024-05-15 05:32:10.124147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.188 [2024-05-15 05:32:10.140080] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:20.188 [2024-05-15 05:32:10.140519] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:20.188 INFO: Running with entropic power schedule (0xFF, 100). 00:07:20.188 INFO: Seed: 2593666246 00:07:20.188 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:20.188 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:20.188 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:20.188 INFO: A corpus is not provided, starting from an empty corpus 00:07:20.188 #2 INITED exec/s: 0 rss: 64Mb 00:07:20.188 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:20.188 This may also happen if the target rejected all inputs we tried so far 00:07:20.188 [2024-05-15 05:32:10.185775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.188 [2024-05-15 05:32:10.185807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.188 [2024-05-15 05:32:10.185865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.188 [2024-05-15 05:32:10.185882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.735 NEW_FUNC[1/687]: 0x4a71f0 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:20.735 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.735 #4 NEW cov: 11868 ft: 11869 corp: 2/25b lim: 50 exec/s: 0 rss: 70Mb L: 24/24 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:20.735 [2024-05-15 05:32:10.516369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.735 [2024-05-15 05:32:10.516407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.735 #6 NEW cov: 11998 ft: 13198 corp: 3/44b lim: 50 exec/s: 0 rss: 70Mb L: 19/24 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:20.735 [2024-05-15 05:32:10.556557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.735 [2024-05-15 05:32:10.556585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.735 [2024-05-15 05:32:10.556618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.735 [2024-05-15 05:32:10.556634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.735 #7 NEW cov: 12004 ft: 13408 corp: 4/68b lim: 50 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 ChangeBit- 00:07:20.735 [2024-05-15 05:32:10.606665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.735 [2024-05-15 05:32:10.606692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.735 [2024-05-15 05:32:10.606736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.735 [2024-05-15 05:32:10.606751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.735 #10 NEW cov: 12089 ft: 13750 corp: 5/94b lim: 50 exec/s: 0 rss: 70Mb L: 26/26 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:07:20.735 [2024-05-15 05:32:10.646848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.735 [2024-05-15 05:32:10.646876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.735 [2024-05-15 05:32:10.646906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.735 [2024-05-15 05:32:10.646921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.735 #11 NEW cov: 12089 ft: 13878 corp: 6/121b lim: 50 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 InsertByte- 00:07:20.735 [2024-05-15 05:32:10.696951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.735 [2024-05-15 05:32:10.696978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.735 [2024-05-15 05:32:10.697011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.735 [2024-05-15 05:32:10.697027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.735 #12 NEW cov: 12089 ft: 14011 corp: 7/147b lim: 50 exec/s: 0 rss: 71Mb L: 26/27 MS: 1 ShuffleBytes- 00:07:20.735 [2024-05-15 05:32:10.737115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.735 [2024-05-15 05:32:10.737142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.735 [2024-05-15 05:32:10.737175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.735 [2024-05-15 05:32:10.737190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.994 #13 NEW cov: 12089 ft: 14120 corp: 8/171b lim: 50 exec/s: 0 rss: 71Mb L: 24/27 MS: 1 CopyPart- 00:07:20.994 [2024-05-15 
05:32:10.777546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.995 [2024-05-15 05:32:10.777573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.777615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.995 [2024-05-15 05:32:10.777630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.777682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:20.995 [2024-05-15 05:32:10.777697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.777750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:20.995 [2024-05-15 05:32:10.777763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:20.995 #14 NEW cov: 12089 ft: 14548 corp: 9/217b lim: 50 exec/s: 0 rss: 71Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:07:20.995 [2024-05-15 05:32:10.827355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.995 [2024-05-15 05:32:10.827387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.827421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.995 [2024-05-15 05:32:10.827435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.995 #15 NEW cov: 12089 ft: 14578 corp: 10/241b lim: 50 exec/s: 0 rss: 71Mb L: 24/46 MS: 1 EraseBytes- 00:07:20.995 [2024-05-15 05:32:10.877433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.995 [2024-05-15 05:32:10.877461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.877494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.995 [2024-05-15 05:32:10.877510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.995 #16 NEW cov: 12089 ft: 14658 corp: 11/265b lim: 50 exec/s: 0 rss: 71Mb L: 24/46 MS: 1 CrossOver- 00:07:20.995 [2024-05-15 05:32:10.927606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.995 [2024-05-15 05:32:10.927634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.927682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.995 [2024-05-15 05:32:10.927698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.995 #17 NEW cov: 
12089 ft: 14683 corp: 12/289b lim: 50 exec/s: 0 rss: 71Mb L: 24/46 MS: 1 ChangeByte- 00:07:20.995 [2024-05-15 05:32:10.977750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:20.995 [2024-05-15 05:32:10.977777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:20.995 [2024-05-15 05:32:10.977810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:20.995 [2024-05-15 05:32:10.977825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:20.995 #18 NEW cov: 12089 ft: 14754 corp: 13/312b lim: 50 exec/s: 0 rss: 71Mb L: 23/46 MS: 1 EraseBytes- 00:07:21.254 [2024-05-15 05:32:11.027876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.254 [2024-05-15 05:32:11.027904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.027944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.254 [2024-05-15 05:32:11.027958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.254 #19 NEW cov: 12089 ft: 14787 corp: 14/336b lim: 50 exec/s: 0 rss: 72Mb L: 24/46 MS: 1 ChangeBit- 00:07:21.254 [2024-05-15 05:32:11.078036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.254 [2024-05-15 05:32:11.078064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.078109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.254 [2024-05-15 05:32:11.078124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.254 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:21.254 #20 NEW cov: 12112 ft: 14819 corp: 15/361b lim: 50 exec/s: 0 rss: 72Mb L: 25/46 MS: 1 InsertByte- 00:07:21.254 [2024-05-15 05:32:11.118385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.254 [2024-05-15 05:32:11.118412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.118465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.254 [2024-05-15 05:32:11.118480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.118531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.254 [2024-05-15 05:32:11.118546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.118598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:3 nsid:0 00:07:21.254 [2024-05-15 05:32:11.118613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.254 #21 NEW cov: 12112 ft: 14882 corp: 16/402b lim: 50 exec/s: 0 rss: 72Mb L: 41/46 MS: 1 CrossOver- 00:07:21.254 [2024-05-15 05:32:11.168680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.254 [2024-05-15 05:32:11.168709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.168753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.254 [2024-05-15 05:32:11.168768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.168818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.254 [2024-05-15 05:32:11.168834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.168885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.254 [2024-05-15 05:32:11.168900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.168952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:21.254 [2024-05-15 05:32:11.168967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:21.254 #22 NEW cov: 12112 ft: 14937 corp: 17/452b lim: 50 exec/s: 22 rss: 72Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:07:21.254 [2024-05-15 05:32:11.218409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.254 [2024-05-15 05:32:11.218437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.218471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.254 [2024-05-15 05:32:11.218486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.254 #23 NEW cov: 12112 ft: 14958 corp: 18/476b lim: 50 exec/s: 23 rss: 72Mb L: 24/50 MS: 1 EraseBytes- 00:07:21.254 [2024-05-15 05:32:11.258525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.254 [2024-05-15 05:32:11.258554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.254 [2024-05-15 05:32:11.258605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.254 [2024-05-15 05:32:11.258624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.513 #24 NEW cov: 12112 ft: 14992 corp: 19/502b lim: 50 exec/s: 24 rss: 72Mb L: 26/50 MS: 1 InsertByte- 
00:07:21.513 [2024-05-15 05:32:11.308775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.513 [2024-05-15 05:32:11.308803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.513 [2024-05-15 05:32:11.308838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.513 [2024-05-15 05:32:11.308853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.513 [2024-05-15 05:32:11.308906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.513 [2024-05-15 05:32:11.308921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.513 #25 NEW cov: 12112 ft: 15243 corp: 20/533b lim: 50 exec/s: 25 rss: 72Mb L: 31/50 MS: 1 CopyPart- 00:07:21.513 [2024-05-15 05:32:11.348897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.513 [2024-05-15 05:32:11.348925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.513 [2024-05-15 05:32:11.348965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.513 [2024-05-15 05:32:11.348980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.513 [2024-05-15 05:32:11.349034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.513 [2024-05-15 05:32:11.349048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.513 #26 NEW cov: 12112 ft: 15247 corp: 21/564b lim: 50 exec/s: 26 rss: 72Mb L: 31/50 MS: 1 CMP- DE: "?\356\0019\323\316\205\000"- 00:07:21.513 [2024-05-15 05:32:11.398886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.513 [2024-05-15 05:32:11.398914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.514 [2024-05-15 05:32:11.398956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.514 [2024-05-15 05:32:11.398971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.514 #27 NEW cov: 12112 ft: 15267 corp: 22/588b lim: 50 exec/s: 27 rss: 72Mb L: 24/50 MS: 1 CopyPart- 00:07:21.514 [2024-05-15 05:32:11.449113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.514 [2024-05-15 05:32:11.449141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.514 [2024-05-15 05:32:11.449185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.514 [2024-05-15 05:32:11.449200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:07:21.514 #28 NEW cov: 12112 ft: 15298 corp: 23/613b lim: 50 exec/s: 28 rss: 72Mb L: 25/50 MS: 1 InsertByte- 00:07:21.514 [2024-05-15 05:32:11.489296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.514 [2024-05-15 05:32:11.489324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.514 [2024-05-15 05:32:11.489359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.514 [2024-05-15 05:32:11.489377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.514 [2024-05-15 05:32:11.489435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.514 [2024-05-15 05:32:11.489451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.514 #29 NEW cov: 12112 ft: 15343 corp: 24/652b lim: 50 exec/s: 29 rss: 72Mb L: 39/50 MS: 1 InsertRepeatedBytes- 00:07:21.514 [2024-05-15 05:32:11.529283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.514 [2024-05-15 05:32:11.529311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.514 [2024-05-15 05:32:11.529358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.514 [2024-05-15 05:32:11.529374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.773 #30 NEW cov: 12112 ft: 15374 corp: 25/676b lim: 50 exec/s: 30 rss: 72Mb L: 24/50 MS: 1 ChangeBit- 00:07:21.773 [2024-05-15 05:32:11.569408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.773 [2024-05-15 05:32:11.569436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.569475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.773 [2024-05-15 05:32:11.569490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.773 #31 NEW cov: 12112 ft: 15379 corp: 26/704b lim: 50 exec/s: 31 rss: 72Mb L: 28/50 MS: 1 CMP- DE: "\002\000"- 00:07:21.773 [2024-05-15 05:32:11.609507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.773 [2024-05-15 05:32:11.609534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.609568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.773 [2024-05-15 05:32:11.609584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.773 #32 NEW cov: 12112 ft: 15381 corp: 27/728b lim: 50 exec/s: 32 rss: 72Mb L: 24/50 MS: 1 CrossOver- 00:07:21.773 [2024-05-15 05:32:11.659926] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.773 [2024-05-15 05:32:11.659955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.659989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.773 [2024-05-15 05:32:11.660005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.660058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:21.773 [2024-05-15 05:32:11.660074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.660126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:21.773 [2024-05-15 05:32:11.660141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:21.773 #33 NEW cov: 12112 ft: 15394 corp: 28/770b lim: 50 exec/s: 33 rss: 72Mb L: 42/50 MS: 1 CrossOver- 00:07:21.773 [2024-05-15 05:32:11.699758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.773 [2024-05-15 05:32:11.699790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.699835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.773 [2024-05-15 05:32:11.699851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.773 #34 NEW cov: 12112 ft: 15400 corp: 29/795b lim: 50 exec/s: 34 rss: 72Mb L: 25/50 MS: 1 InsertByte- 00:07:21.773 [2024-05-15 05:32:11.739885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.773 [2024-05-15 05:32:11.739913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.739946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.773 [2024-05-15 05:32:11.739962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.773 #35 NEW cov: 12112 ft: 15418 corp: 30/819b lim: 50 exec/s: 35 rss: 72Mb L: 24/50 MS: 1 InsertByte- 00:07:21.773 [2024-05-15 05:32:11.790061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:21.773 [2024-05-15 05:32:11.790088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.773 [2024-05-15 05:32:11.790119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:21.773 [2024-05-15 05:32:11.790134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.032 #36 NEW cov: 12112 ft: 15439 corp: 31/845b lim: 50 
exec/s: 36 rss: 73Mb L: 26/50 MS: 1 ChangeByte- 00:07:22.032 [2024-05-15 05:32:11.840344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.032 [2024-05-15 05:32:11.840372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.032 [2024-05-15 05:32:11.840418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.032 [2024-05-15 05:32:11.840433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.032 [2024-05-15 05:32:11.840487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:22.032 [2024-05-15 05:32:11.840502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.032 #37 NEW cov: 12112 ft: 15444 corp: 32/879b lim: 50 exec/s: 37 rss: 73Mb L: 34/50 MS: 1 PersAutoDict- DE: "?\356\0019\323\316\205\000"- 00:07:22.032 [2024-05-15 05:32:11.880616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.032 [2024-05-15 05:32:11.880645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.032 [2024-05-15 05:32:11.880683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.032 [2024-05-15 05:32:11.880698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.032 [2024-05-15 05:32:11.880751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:22.032 [2024-05-15 05:32:11.880767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.032 [2024-05-15 05:32:11.880819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:22.032 [2024-05-15 05:32:11.880833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.032 #38 NEW cov: 12112 ft: 15457 corp: 33/924b lim: 50 exec/s: 38 rss: 73Mb L: 45/50 MS: 1 CrossOver- 00:07:22.032 [2024-05-15 05:32:11.920256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.032 [2024-05-15 05:32:11.920283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.032 #39 NEW cov: 12112 ft: 15474 corp: 34/943b lim: 50 exec/s: 39 rss: 73Mb L: 19/50 MS: 1 ChangeBit- 00:07:22.032 [2024-05-15 05:32:11.960383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.032 [2024-05-15 05:32:11.960412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.032 #40 NEW cov: 12112 ft: 15484 corp: 35/961b lim: 50 exec/s: 40 rss: 73Mb L: 18/50 MS: 1 EraseBytes- 00:07:22.032 [2024-05-15 05:32:12.000487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 
cid:0 nsid:0 00:07:22.032 [2024-05-15 05:32:12.000514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.032 #41 NEW cov: 12112 ft: 15501 corp: 36/974b lim: 50 exec/s: 41 rss: 73Mb L: 13/50 MS: 1 EraseBytes- 00:07:22.032 [2024-05-15 05:32:12.050775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.032 [2024-05-15 05:32:12.050804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.032 [2024-05-15 05:32:12.050837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.032 [2024-05-15 05:32:12.050853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.291 #42 NEW cov: 12112 ft: 15516 corp: 37/1000b lim: 50 exec/s: 42 rss: 73Mb L: 26/50 MS: 1 ChangeBit- 00:07:22.291 [2024-05-15 05:32:12.090784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.291 [2024-05-15 05:32:12.090811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.291 #43 NEW cov: 12112 ft: 15531 corp: 38/1019b lim: 50 exec/s: 43 rss: 73Mb L: 19/50 MS: 1 ChangeByte- 00:07:22.291 [2024-05-15 05:32:12.141176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:22.291 [2024-05-15 05:32:12.141204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.291 [2024-05-15 05:32:12.141240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:22.291 [2024-05-15 05:32:12.141256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.291 [2024-05-15 05:32:12.141307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:22.291 [2024-05-15 05:32:12.141323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.291 #44 NEW cov: 12112 ft: 15541 corp: 39/1057b lim: 50 exec/s: 22 rss: 73Mb L: 38/50 MS: 1 InsertRepeatedBytes- 00:07:22.291 #44 DONE cov: 12112 ft: 15541 corp: 39/1057b lim: 50 exec/s: 22 rss: 73Mb 00:07:22.291 ###### Recommended dictionary. ###### 00:07:22.291 "?\356\0019\323\316\205\000" # Uses: 1 00:07:22.291 "\002\000" # Uses: 0 00:07:22.291 ###### End of recommended dictionary. 
###### 00:07:22.291 Done 44 runs in 2 second(s) 00:07:22.291 [2024-05-15 05:32:12.171073] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.291 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:22.550 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:22.550 05:32:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:22.551 [2024-05-15 05:32:12.337345] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
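The trace above builds the transport ID string handed to the harness through -F and, via sed, rewrites the listener in fuzz_json.conf from port 4420 to 4422 so that fuzzer type 22 gets its own NVMe/TCP listener. The snippet below is only meant to show what that space-separated key:value string encodes; the fuzz_trid struct and parse_trid() tokenizer are illustrative stand-ins (the real harness presumably defers to SPDK's own transport-ID parsing), so the field names and sizes here are assumptions rather than SPDK definitions.

#include <stdio.h>
#include <string.h>

/* Simplified holder for the fields seen in the -F argument above. */
struct fuzz_trid {
    char trtype[16];
    char adrfam[16];
    char traddr[64];
    char trsvcid[16];
    char subnqn[224];
};

/* Split "key:value key:value ..." on spaces, then on the first ':' of each
 * token, so values that themselves contain ':' (the subnqn) stay intact. */
static int parse_trid(const char *str, struct fuzz_trid *out)
{
    char buf[512];
    char *tok;

    memset(out, 0, sizeof(*out));
    snprintf(buf, sizeof(buf), "%s", str);

    for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " ")) {
        char *val = strchr(tok, ':');
        if (val == NULL) {
            return -1;                       /* malformed key:value pair */
        }
        *val++ = '\0';
        if (strcmp(tok, "trtype") == 0) {
            snprintf(out->trtype, sizeof(out->trtype), "%s", val);
        } else if (strcmp(tok, "adrfam") == 0) {
            snprintf(out->adrfam, sizeof(out->adrfam), "%s", val);
        } else if (strcmp(tok, "traddr") == 0) {
            snprintf(out->traddr, sizeof(out->traddr), "%s", val);
        } else if (strcmp(tok, "trsvcid") == 0) {
            snprintf(out->trsvcid, sizeof(out->trsvcid), "%s", val);
        } else if (strcmp(tok, "subnqn") == 0) {
            snprintf(out->subnqn, sizeof(out->subnqn), "%s", val);
        }
    }
    return 0;
}

int main(void)
{
    struct fuzz_trid trid;

    parse_trid("trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 "
               "traddr:127.0.0.1 trsvcid:4422", &trid);
    printf("connect %s://%s:%s (%s)\n",
           trid.trtype, trid.traddr, trid.trsvcid, trid.subnqn);
    return 0;
}

Compiled with any C compiler this prints "connect tcp://127.0.0.1:4422 (nqn.2016-06.io.spdk:cnode1)", i.e. the same listener that the "NVMe/TCP Target Listening on 127.0.0.1 port 4422" notice below reports once the target is up.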
00:07:22.551 [2024-05-15 05:32:12.337425] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276006 ] 00:07:22.551 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.551 [2024-05-15 05:32:12.510334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.809 [2024-05-15 05:32:12.576271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.809 [2024-05-15 05:32:12.635191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.809 [2024-05-15 05:32:12.651143] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:22.809 [2024-05-15 05:32:12.651564] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:22.809 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.809 INFO: Seed: 808685616 00:07:22.809 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:22.809 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:22.809 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:22.809 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.809 #2 INITED exec/s: 0 rss: 63Mb 00:07:22.809 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.809 This may also happen if the target rejected all inputs we tried so far 00:07:22.809 [2024-05-15 05:32:12.700104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:22.809 [2024-05-15 05:32:12.700135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.809 [2024-05-15 05:32:12.700171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:22.810 [2024-05-15 05:32:12.700190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.810 [2024-05-15 05:32:12.700243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:22.810 [2024-05-15 05:32:12.700259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.810 [2024-05-15 05:32:12.700314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:22.810 [2024-05-15 05:32:12.700329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.068 NEW_FUNC[1/687]: 0x4a94b0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:23.068 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.068 #4 NEW cov: 11894 ft: 11895 corp: 2/71b lim: 85 exec/s: 0 rss: 70Mb L: 70/70 MS: 2 CMP-InsertRepeatedBytes- DE: "~\000\000\000"- 00:07:23.068 [2024-05-15 05:32:13.011000] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.068 [2024-05-15 05:32:13.011044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.068 [2024-05-15 05:32:13.011114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.068 [2024-05-15 05:32:13.011130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.068 [2024-05-15 05:32:13.011185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.068 [2024-05-15 05:32:13.011201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.068 [2024-05-15 05:32:13.011258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.068 [2024-05-15 05:32:13.011273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.068 #5 NEW cov: 12024 ft: 12660 corp: 3/141b lim: 85 exec/s: 0 rss: 70Mb L: 70/70 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:23.068 [2024-05-15 05:32:13.061012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.068 [2024-05-15 05:32:13.061041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.068 [2024-05-15 05:32:13.061085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.068 [2024-05-15 05:32:13.061100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.068 [2024-05-15 05:32:13.061155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.068 [2024-05-15 05:32:13.061171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.068 [2024-05-15 05:32:13.061224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.069 [2024-05-15 05:32:13.061239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.069 #6 NEW cov: 12030 ft: 12850 corp: 4/215b lim: 85 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:23.326 [2024-05-15 05:32:13.101062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.326 [2024-05-15 05:32:13.101091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.101129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.326 [2024-05-15 05:32:13.101144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.101201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:2 nsid:0 00:07:23.326 [2024-05-15 05:32:13.101217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.101272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.326 [2024-05-15 05:32:13.101288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.326 #9 NEW cov: 12115 ft: 13090 corp: 5/284b lim: 85 exec/s: 0 rss: 70Mb L: 69/74 MS: 3 ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:07:23.326 [2024-05-15 05:32:13.141225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.326 [2024-05-15 05:32:13.141254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.141300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.326 [2024-05-15 05:32:13.141316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.141370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.326 [2024-05-15 05:32:13.141391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.141445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.326 [2024-05-15 05:32:13.141459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.326 #10 NEW cov: 12115 ft: 13239 corp: 6/358b lim: 85 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 ChangeBinInt- 00:07:23.326 [2024-05-15 05:32:13.190870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.326 [2024-05-15 05:32:13.190899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.326 #11 NEW cov: 12115 ft: 14207 corp: 7/376b lim: 85 exec/s: 0 rss: 70Mb L: 18/74 MS: 1 CrossOver- 00:07:23.326 [2024-05-15 05:32:13.241478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.326 [2024-05-15 05:32:13.241508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.241553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.326 [2024-05-15 05:32:13.241569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.241624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.326 [2024-05-15 05:32:13.241639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.241693] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.326 [2024-05-15 05:32:13.241708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.326 #12 NEW cov: 12115 ft: 14251 corp: 8/454b lim: 85 exec/s: 0 rss: 70Mb L: 78/78 MS: 1 CrossOver- 00:07:23.326 [2024-05-15 05:32:13.281608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.326 [2024-05-15 05:32:13.281639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.281675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.326 [2024-05-15 05:32:13.281691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.281745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.326 [2024-05-15 05:32:13.281761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.326 [2024-05-15 05:32:13.281817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.326 [2024-05-15 05:32:13.281833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.326 #13 NEW cov: 12115 ft: 14272 corp: 9/528b lim: 85 exec/s: 0 rss: 71Mb L: 74/78 MS: 1 ShuffleBytes- 00:07:23.326 [2024-05-15 05:32:13.321273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.326 [2024-05-15 05:32:13.321300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 #14 NEW cov: 12115 ft: 14411 corp: 10/546b lim: 85 exec/s: 0 rss: 71Mb L: 18/78 MS: 1 ChangeByte- 00:07:23.585 [2024-05-15 05:32:13.371891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.585 [2024-05-15 05:32:13.371919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.371965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.585 [2024-05-15 05:32:13.371981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.372037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.585 [2024-05-15 05:32:13.372054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.372110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.585 [2024-05-15 05:32:13.372126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.585 #15 NEW cov: 12115 ft: 14455 corp: 11/615b lim: 85 exec/s: 0 rss: 71Mb L: 
69/78 MS: 1 ChangeBinInt- 00:07:23.585 [2024-05-15 05:32:13.422006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.585 [2024-05-15 05:32:13.422034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.422084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.585 [2024-05-15 05:32:13.422101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.422156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.585 [2024-05-15 05:32:13.422172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.422230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.585 [2024-05-15 05:32:13.422247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.585 #16 NEW cov: 12115 ft: 14468 corp: 12/685b lim: 85 exec/s: 0 rss: 71Mb L: 70/78 MS: 1 InsertByte- 00:07:23.585 [2024-05-15 05:32:13.472117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.585 [2024-05-15 05:32:13.472145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.472194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.585 [2024-05-15 05:32:13.472210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.472264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.585 [2024-05-15 05:32:13.472280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.472334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.585 [2024-05-15 05:32:13.472350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.585 #22 NEW cov: 12115 ft: 14493 corp: 13/754b lim: 85 exec/s: 0 rss: 71Mb L: 69/78 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:23.585 [2024-05-15 05:32:13.512268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.585 [2024-05-15 05:32:13.512297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.512345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.585 [2024-05-15 05:32:13.512361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.512419] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.585 [2024-05-15 05:32:13.512435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.512490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.585 [2024-05-15 05:32:13.512506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.585 #23 NEW cov: 12115 ft: 14591 corp: 14/827b lim: 85 exec/s: 0 rss: 71Mb L: 73/78 MS: 1 EraseBytes- 00:07:23.585 [2024-05-15 05:32:13.552370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.585 [2024-05-15 05:32:13.552403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.552452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.585 [2024-05-15 05:32:13.552468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.552521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.585 [2024-05-15 05:32:13.552537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.552591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.585 [2024-05-15 05:32:13.552606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.585 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:23.585 #24 NEW cov: 12138 ft: 14616 corp: 15/901b lim: 85 exec/s: 0 rss: 71Mb L: 74/78 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:23.585 [2024-05-15 05:32:13.602553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.585 [2024-05-15 05:32:13.602581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.602628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.585 [2024-05-15 05:32:13.602644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.585 [2024-05-15 05:32:13.602698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.586 [2024-05-15 05:32:13.602715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.586 [2024-05-15 05:32:13.602795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.586 [2024-05-15 05:32:13.602812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.844 #25 
NEW cov: 12138 ft: 14648 corp: 16/980b lim: 85 exec/s: 0 rss: 71Mb L: 79/79 MS: 1 InsertRepeatedBytes- 00:07:23.844 [2024-05-15 05:32:13.652693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.845 [2024-05-15 05:32:13.652721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.652768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.845 [2024-05-15 05:32:13.652783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.652839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.845 [2024-05-15 05:32:13.652854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.652909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.845 [2024-05-15 05:32:13.652924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.845 #26 NEW cov: 12138 ft: 14662 corp: 17/1057b lim: 85 exec/s: 0 rss: 71Mb L: 77/79 MS: 1 EraseBytes- 00:07:23.845 [2024-05-15 05:32:13.702801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.845 [2024-05-15 05:32:13.702828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.702878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.845 [2024-05-15 05:32:13.702893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.702947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.845 [2024-05-15 05:32:13.702963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.703020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.845 [2024-05-15 05:32:13.703036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.845 #27 NEW cov: 12138 ft: 14735 corp: 18/1135b lim: 85 exec/s: 27 rss: 71Mb L: 78/79 MS: 1 ShuffleBytes- 00:07:23.845 [2024-05-15 05:32:13.742948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.845 [2024-05-15 05:32:13.742976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.743016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.845 [2024-05-15 05:32:13.743031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:23.845 [2024-05-15 05:32:13.743089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.845 [2024-05-15 05:32:13.743105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.743160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.845 [2024-05-15 05:32:13.743176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.845 #28 NEW cov: 12138 ft: 14746 corp: 19/1218b lim: 85 exec/s: 28 rss: 71Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:07:23.845 [2024-05-15 05:32:13.782583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.845 [2024-05-15 05:32:13.782611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.845 #29 NEW cov: 12138 ft: 14790 corp: 20/1236b lim: 85 exec/s: 29 rss: 72Mb L: 18/83 MS: 1 CrossOver- 00:07:23.845 [2024-05-15 05:32:13.833164] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:23.845 [2024-05-15 05:32:13.833192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.833241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:23.845 [2024-05-15 05:32:13.833257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.833313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:23.845 [2024-05-15 05:32:13.833329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.845 [2024-05-15 05:32:13.833386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:23.845 [2024-05-15 05:32:13.833402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.845 #30 NEW cov: 12138 ft: 14808 corp: 21/1314b lim: 85 exec/s: 30 rss: 72Mb L: 78/83 MS: 1 ChangeBinInt- 00:07:24.104 [2024-05-15 05:32:13.873298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.104 [2024-05-15 05:32:13.873326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:13.873376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.104 [2024-05-15 05:32:13.873396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:13.873450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.104 [2024-05-15 05:32:13.873467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:13.873522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.104 [2024-05-15 05:32:13.873537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.104 #31 NEW cov: 12138 ft: 14827 corp: 22/1387b lim: 85 exec/s: 31 rss: 72Mb L: 73/83 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:24.104 [2024-05-15 05:32:13.922976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.104 [2024-05-15 05:32:13.923006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.104 #32 NEW cov: 12138 ft: 14847 corp: 23/1411b lim: 85 exec/s: 32 rss: 72Mb L: 24/83 MS: 1 InsertRepeatedBytes- 00:07:24.104 [2024-05-15 05:32:13.973580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.104 [2024-05-15 05:32:13.973608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:13.973655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.104 [2024-05-15 05:32:13.973671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:13.973726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.104 [2024-05-15 05:32:13.973742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:13.973797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.104 [2024-05-15 05:32:13.973812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.104 #33 NEW cov: 12138 ft: 14926 corp: 24/1488b lim: 85 exec/s: 33 rss: 72Mb L: 77/83 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:24.104 [2024-05-15 05:32:14.013693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.104 [2024-05-15 05:32:14.013721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:14.013769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.104 [2024-05-15 05:32:14.013784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:14.013836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.104 [2024-05-15 05:32:14.013853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:14.013908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.104 [2024-05-15 05:32:14.013924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.104 #34 NEW cov: 12138 ft: 14928 corp: 25/1566b lim: 85 exec/s: 34 rss: 72Mb L: 78/83 MS: 1 ChangeByte- 00:07:24.104 [2024-05-15 05:32:14.063831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.104 [2024-05-15 05:32:14.063858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:14.063910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.104 [2024-05-15 05:32:14.063926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:14.063982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.104 [2024-05-15 05:32:14.063997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.104 [2024-05-15 05:32:14.064053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.104 [2024-05-15 05:32:14.064069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.104 #35 NEW cov: 12138 ft: 14946 corp: 26/1643b lim: 85 exec/s: 35 rss: 72Mb L: 77/83 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:24.104 [2024-05-15 05:32:14.113976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.104 [2024-05-15 05:32:14.114003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.105 [2024-05-15 05:32:14.114052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.105 [2024-05-15 05:32:14.114068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.105 [2024-05-15 05:32:14.114123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.105 [2024-05-15 05:32:14.114138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.105 [2024-05-15 05:32:14.114193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.105 [2024-05-15 05:32:14.114208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.364 #36 NEW cov: 12138 ft: 14985 corp: 27/1717b lim: 85 exec/s: 36 rss: 72Mb L: 74/83 MS: 1 ShuffleBytes- 00:07:24.364 [2024-05-15 05:32:14.154095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.364 [2024-05-15 05:32:14.154122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.154169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.364 [2024-05-15 05:32:14.154185] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.154239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.364 [2024-05-15 05:32:14.154254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.154307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.364 [2024-05-15 05:32:14.154323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.364 #37 NEW cov: 12138 ft: 15002 corp: 28/1787b lim: 85 exec/s: 37 rss: 72Mb L: 70/83 MS: 1 ChangeBinInt- 00:07:24.364 [2024-05-15 05:32:14.194224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.364 [2024-05-15 05:32:14.194252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.194300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.364 [2024-05-15 05:32:14.194317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.194373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.364 [2024-05-15 05:32:14.194395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.194459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.364 [2024-05-15 05:32:14.194475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.364 #38 NEW cov: 12138 ft: 15021 corp: 29/1866b lim: 85 exec/s: 38 rss: 72Mb L: 79/83 MS: 1 ChangeBit- 00:07:24.364 [2024-05-15 05:32:14.244408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.364 [2024-05-15 05:32:14.244438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.244469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.364 [2024-05-15 05:32:14.244484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.244541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.364 [2024-05-15 05:32:14.244558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.244614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.364 [2024-05-15 05:32:14.244629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 
p:0 m:0 dnr:1 00:07:24.364 #39 NEW cov: 12138 ft: 15030 corp: 30/1941b lim: 85 exec/s: 39 rss: 72Mb L: 75/83 MS: 1 InsertByte- 00:07:24.364 [2024-05-15 05:32:14.284485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.364 [2024-05-15 05:32:14.284513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.284562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.364 [2024-05-15 05:32:14.284578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.284634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.364 [2024-05-15 05:32:14.284648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.284702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.364 [2024-05-15 05:32:14.284718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.364 #45 NEW cov: 12138 ft: 15081 corp: 31/2024b lim: 85 exec/s: 45 rss: 72Mb L: 83/83 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:24.364 [2024-05-15 05:32:14.334675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.364 [2024-05-15 05:32:14.334705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.334755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.364 [2024-05-15 05:32:14.334772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.334828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.364 [2024-05-15 05:32:14.334845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.364 [2024-05-15 05:32:14.334899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.364 [2024-05-15 05:32:14.334914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.384776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.623 [2024-05-15 05:32:14.384805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.384852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.623 [2024-05-15 05:32:14.384871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.384925] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.623 [2024-05-15 05:32:14.384942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.384999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.623 [2024-05-15 05:32:14.385013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.623 #47 NEW cov: 12138 ft: 15120 corp: 32/2095b lim: 85 exec/s: 47 rss: 73Mb L: 71/83 MS: 2 CrossOver-InsertByte- 00:07:24.623 [2024-05-15 05:32:14.424850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.623 [2024-05-15 05:32:14.424878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.424925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.623 [2024-05-15 05:32:14.424941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.424996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.623 [2024-05-15 05:32:14.425012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.623 [2024-05-15 05:32:14.425069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.623 [2024-05-15 05:32:14.425084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.623 #48 NEW cov: 12138 ft: 15129 corp: 33/2168b lim: 85 exec/s: 48 rss: 73Mb L: 73/83 MS: 1 PersAutoDict- DE: "~\000\000\000"- 00:07:24.623 [2024-05-15 05:32:14.474615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.624 [2024-05-15 05:32:14.474643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.624 #49 NEW cov: 12138 ft: 15153 corp: 34/2186b lim: 85 exec/s: 49 rss: 73Mb L: 18/83 MS: 1 ShuffleBytes- 00:07:24.624 [2024-05-15 05:32:14.515077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.624 [2024-05-15 05:32:14.515106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.515155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.624 [2024-05-15 05:32:14.515171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.515226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.624 [2024-05-15 05:32:14.515241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.624 
[2024-05-15 05:32:14.515297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.624 [2024-05-15 05:32:14.515314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.624 #50 NEW cov: 12138 ft: 15159 corp: 35/2256b lim: 85 exec/s: 50 rss: 73Mb L: 70/83 MS: 1 InsertRepeatedBytes- 00:07:24.624 [2024-05-15 05:32:14.565220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.624 [2024-05-15 05:32:14.565252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.565290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.624 [2024-05-15 05:32:14.565306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.565361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.624 [2024-05-15 05:32:14.565377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.565439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.624 [2024-05-15 05:32:14.565455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.624 #51 NEW cov: 12138 ft: 15166 corp: 36/2327b lim: 85 exec/s: 51 rss: 73Mb L: 71/83 MS: 1 InsertByte- 00:07:24.624 [2024-05-15 05:32:14.615373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.624 [2024-05-15 05:32:14.615403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.615457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.624 [2024-05-15 05:32:14.615472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.615514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.624 [2024-05-15 05:32:14.615530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.624 [2024-05-15 05:32:14.615585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.624 [2024-05-15 05:32:14.615602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.624 #52 NEW cov: 12138 ft: 15180 corp: 37/2397b lim: 85 exec/s: 52 rss: 73Mb L: 70/83 MS: 1 InsertByte- 00:07:24.884 [2024-05-15 05:32:14.655503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.884 [2024-05-15 05:32:14.655532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:24.884 [2024-05-15 05:32:14.655579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.884 [2024-05-15 05:32:14.655593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.884 [2024-05-15 05:32:14.655648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.884 [2024-05-15 05:32:14.655664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.884 [2024-05-15 05:32:14.655720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.884 [2024-05-15 05:32:14.655736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.884 #53 NEW cov: 12138 ft: 15190 corp: 38/2467b lim: 85 exec/s: 53 rss: 73Mb L: 70/83 MS: 1 ShuffleBytes- 00:07:24.884 [2024-05-15 05:32:14.695664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:24.884 [2024-05-15 05:32:14.695692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.884 [2024-05-15 05:32:14.695739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:24.884 [2024-05-15 05:32:14.695759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.884 [2024-05-15 05:32:14.695813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:24.884 [2024-05-15 05:32:14.695829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:24.884 [2024-05-15 05:32:14.695887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:24.884 [2024-05-15 05:32:14.695904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:24.884 #54 NEW cov: 12138 ft: 15197 corp: 39/2551b lim: 85 exec/s: 27 rss: 73Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:07:24.884 #54 DONE cov: 12138 ft: 15197 corp: 39/2551b lim: 85 exec/s: 27 rss: 73Mb 00:07:24.884 ###### Recommended dictionary. ###### 00:07:24.884 "~\000\000\000" # Uses: 9 00:07:24.884 ###### End of recommended dictionary. 
###### 00:07:24.884 Done 54 runs in 2 second(s) 00:07:24.884 [2024-05-15 05:32:14.725601] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.884 05:32:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:24.884 [2024-05-15 05:32:14.892861] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
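Note on reproducing this run: the nvmf/run.sh xtrace just above records everything needed to re-launch fuzzer 23 by hand. The following is a minimal sketch only, assuming the same SPDK checkout layout; SPDK_DIR and OUT_DIR are placeholder names introduced here (they do not appear in the trace), and the output redirections for the sed and suppression-file steps are assumed, since set -x does not print redirections. The fuzzer flags themselves are copied verbatim from the command line in the trace.

  #!/usr/bin/env bash
  # Sketch: re-run nvmf fuzzer 23 outside the CI job (paths are assumptions, adjust to your tree).
  SPDK_DIR=/path/to/spdk                        # the log uses /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  OUT_DIR=$SPDK_DIR/../output/llvm              # passed via -P, as in the trace
  CORPUS_DIR=$SPDK_DIR/../corpus/llvm_nvmf_23   # passed via -D, as in the trace
  NVMF_CFG=/tmp/fuzz_json_23.conf
  SUPP=/var/tmp/suppress_nvmf_fuzz

  i=23
  port=44$(printf %02d "$i")                    # fuzzer index 23 -> TCP listener port 4423
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

  mkdir -p "$CORPUS_DIR"
  # Rewrite the default listener port in the JSON config (redirection assumed).
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$NVMF_CFG"
  # LSAN suppressions echoed by the wrapper (redirection assumed).
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$SUPP"

  LSAN_OPTIONS="report_objects=1:suppressions=$SUPP:print_suppressions=0" \
    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m 0x1 -s 512 -P "$OUT_DIR/" -F "$trid" -c "$NVMF_CFG" -t 1 -D "$CORPUS_DIR" -Z 23

With -t 1 the wrapper gives each fuzzer index a one-second budget and its own port, corpus directory, and JSON config, so successive runs in this job do not collide with one another.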
00:07:24.884 [2024-05-15 05:32:14.892932] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276538 ] 00:07:25.144 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.144 [2024-05-15 05:32:15.066698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.144 [2024-05-15 05:32:15.131668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.403 [2024-05-15 05:32:15.190916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.403 [2024-05-15 05:32:15.206869] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:25.403 [2024-05-15 05:32:15.207288] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:25.403 INFO: Running with entropic power schedule (0xFF, 100). 00:07:25.403 INFO: Seed: 3365689340 00:07:25.403 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:25.403 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:25.403 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:25.403 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.403 #2 INITED exec/s: 0 rss: 64Mb 00:07:25.403 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.403 This may also happen if the target rejected all inputs we tried so far 00:07:25.403 [2024-05-15 05:32:15.272427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.403 [2024-05-15 05:32:15.272458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.663 NEW_FUNC[1/686]: 0x4ac6e0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:25.663 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:25.663 #3 NEW cov: 11827 ft: 11826 corp: 2/6b lim: 25 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377\000"- 00:07:25.663 [2024-05-15 05:32:15.603527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.663 [2024-05-15 05:32:15.603587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.663 [2024-05-15 05:32:15.603678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.663 [2024-05-15 05:32:15.603708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.663 #5 NEW cov: 11957 ft: 12892 corp: 3/20b lim: 25 exec/s: 0 rss: 70Mb L: 14/14 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:25.663 [2024-05-15 05:32:15.653267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.663 [2024-05-15 05:32:15.653297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.663 #6 NEW cov: 11963 ft: 12994 corp: 4/28b lim: 25 exec/s: 0 rss: 70Mb L: 8/14 MS: 1 CrossOver- 00:07:25.922 [2024-05-15 05:32:15.703689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.923 [2024-05-15 05:32:15.703716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.703752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.923 [2024-05-15 05:32:15.703767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.703823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.923 [2024-05-15 05:32:15.703840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.923 #9 NEW cov: 12048 ft: 13621 corp: 5/47b lim: 25 exec/s: 0 rss: 71Mb L: 19/19 MS: 3 CrossOver-EraseBytes-InsertRepeatedBytes- 00:07:25.923 [2024-05-15 05:32:15.743964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.923 [2024-05-15 05:32:15.743994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.744031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.923 [2024-05-15 05:32:15.744047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.744104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.923 [2024-05-15 05:32:15.744120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.744178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.923 [2024-05-15 05:32:15.744194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.923 #10 NEW cov: 12048 ft: 14114 corp: 6/67b lim: 25 exec/s: 0 rss: 71Mb L: 20/20 MS: 1 InsertByte- 00:07:25.923 [2024-05-15 05:32:15.793745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.923 [2024-05-15 05:32:15.793772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.923 #11 NEW cov: 12048 ft: 14197 corp: 7/75b lim: 25 exec/s: 0 rss: 71Mb L: 8/20 MS: 1 ChangeBit- 00:07:25.923 [2024-05-15 05:32:15.843844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.923 [2024-05-15 05:32:15.843873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.923 #12 NEW cov: 12048 ft: 14250 corp: 8/80b lim: 25 exec/s: 0 rss: 71Mb L: 5/20 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:25.923 [2024-05-15 
05:32:15.884322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.923 [2024-05-15 05:32:15.884352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.884407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:25.923 [2024-05-15 05:32:15.884424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.884482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:25.923 [2024-05-15 05:32:15.884499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.923 [2024-05-15 05:32:15.884556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:25.923 [2024-05-15 05:32:15.884572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.923 #13 NEW cov: 12048 ft: 14310 corp: 9/102b lim: 25 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 CopyPart- 00:07:25.923 [2024-05-15 05:32:15.934114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:25.923 [2024-05-15 05:32:15.934142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.182 #14 NEW cov: 12048 ft: 14327 corp: 10/107b lim: 25 exec/s: 0 rss: 71Mb L: 5/22 MS: 1 CopyPart- 00:07:26.182 [2024-05-15 05:32:15.984543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.182 [2024-05-15 05:32:15.984570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:15.984617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.182 [2024-05-15 05:32:15.984632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:15.984691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.182 [2024-05-15 05:32:15.984708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.182 #15 NEW cov: 12048 ft: 14375 corp: 11/126b lim: 25 exec/s: 0 rss: 71Mb L: 19/22 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:26.182 [2024-05-15 05:32:16.024394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.182 [2024-05-15 05:32:16.024423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.182 #16 NEW cov: 12048 ft: 14481 corp: 12/132b lim: 25 exec/s: 0 rss: 71Mb L: 6/22 MS: 1 InsertByte- 00:07:26.182 [2024-05-15 05:32:16.074843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.182 [2024-05-15 05:32:16.074871] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.074923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.182 [2024-05-15 05:32:16.074941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.074997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.182 [2024-05-15 05:32:16.075013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.075069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.182 [2024-05-15 05:32:16.075083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.182 #17 NEW cov: 12048 ft: 14495 corp: 13/152b lim: 25 exec/s: 0 rss: 72Mb L: 20/22 MS: 1 InsertByte- 00:07:26.182 [2024-05-15 05:32:16.115032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.182 [2024-05-15 05:32:16.115060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.115120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.182 [2024-05-15 05:32:16.115135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.115191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.182 [2024-05-15 05:32:16.115208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.115265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.182 [2024-05-15 05:32:16.115280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.115336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:26.182 [2024-05-15 05:32:16.115352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:26.182 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:26.182 #18 NEW cov: 12071 ft: 14574 corp: 14/177b lim: 25 exec/s: 0 rss: 72Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:26.182 [2024-05-15 05:32:16.155056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.182 [2024-05-15 05:32:16.155085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.155124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.182 [2024-05-15 05:32:16.155140] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.155200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.182 [2024-05-15 05:32:16.155216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.182 [2024-05-15 05:32:16.155274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.182 [2024-05-15 05:32:16.155288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.182 #19 NEW cov: 12071 ft: 14645 corp: 15/197b lim: 25 exec/s: 0 rss: 72Mb L: 20/25 MS: 1 CopyPart- 00:07:26.441 [2024-05-15 05:32:16.204828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.441 [2024-05-15 05:32:16.204858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.441 #25 NEW cov: 12071 ft: 14654 corp: 16/202b lim: 25 exec/s: 0 rss: 72Mb L: 5/25 MS: 1 CopyPart- 00:07:26.441 [2024-05-15 05:32:16.244920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.441 [2024-05-15 05:32:16.244948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.441 #26 NEW cov: 12071 ft: 14672 corp: 17/208b lim: 25 exec/s: 26 rss: 72Mb L: 6/25 MS: 1 CrossOver- 00:07:26.441 [2024-05-15 05:32:16.295048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.441 [2024-05-15 05:32:16.295075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.441 #27 NEW cov: 12071 ft: 14694 corp: 18/216b lim: 25 exec/s: 27 rss: 72Mb L: 8/25 MS: 1 ChangeBit- 00:07:26.441 [2024-05-15 05:32:16.335576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.441 [2024-05-15 05:32:16.335604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.441 [2024-05-15 05:32:16.335652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.441 [2024-05-15 05:32:16.335667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.441 [2024-05-15 05:32:16.335723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.441 [2024-05-15 05:32:16.335739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.441 [2024-05-15 05:32:16.335797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.441 [2024-05-15 05:32:16.335812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.441 #28 NEW cov: 12071 ft: 14724 corp: 19/237b lim: 25 exec/s: 28 rss: 72Mb L: 21/25 MS: 1 InsertByte- 
00:07:26.441 [2024-05-15 05:32:16.375552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.441 [2024-05-15 05:32:16.375579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.441 [2024-05-15 05:32:16.375626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.441 [2024-05-15 05:32:16.375642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.441 [2024-05-15 05:32:16.375704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.441 [2024-05-15 05:32:16.375718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.441 #29 NEW cov: 12071 ft: 14760 corp: 20/256b lim: 25 exec/s: 29 rss: 72Mb L: 19/25 MS: 1 CopyPart- 00:07:26.441 [2024-05-15 05:32:16.425456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.441 [2024-05-15 05:32:16.425484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.441 #30 NEW cov: 12071 ft: 14829 corp: 21/264b lim: 25 exec/s: 30 rss: 72Mb L: 8/25 MS: 1 ShuffleBytes- 00:07:26.699 [2024-05-15 05:32:16.475922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.699 [2024-05-15 05:32:16.475950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.699 [2024-05-15 05:32:16.476005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.699 [2024-05-15 05:32:16.476021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.699 [2024-05-15 05:32:16.476079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.699 [2024-05-15 05:32:16.476094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.699 [2024-05-15 05:32:16.476151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.699 [2024-05-15 05:32:16.476168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.699 #31 NEW cov: 12071 ft: 14856 corp: 22/288b lim: 25 exec/s: 31 rss: 72Mb L: 24/25 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:26.699 [2024-05-15 05:32:16.515668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.699 [2024-05-15 05:32:16.515695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.699 #32 NEW cov: 12071 ft: 14864 corp: 23/296b lim: 25 exec/s: 32 rss: 72Mb L: 8/25 MS: 1 ShuffleBytes- 00:07:26.699 [2024-05-15 05:32:16.555808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.699 [2024-05-15 05:32:16.555836] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.699 #33 NEW cov: 12071 ft: 14867 corp: 24/301b lim: 25 exec/s: 33 rss: 72Mb L: 5/25 MS: 1 CopyPart- 00:07:26.699 [2024-05-15 05:32:16.595928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.699 [2024-05-15 05:32:16.595956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.699 #34 NEW cov: 12071 ft: 14877 corp: 25/307b lim: 25 exec/s: 34 rss: 72Mb L: 6/25 MS: 1 ChangeBinInt- 00:07:26.699 [2024-05-15 05:32:16.646053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.699 [2024-05-15 05:32:16.646080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.699 #35 NEW cov: 12071 ft: 14882 corp: 26/315b lim: 25 exec/s: 35 rss: 72Mb L: 8/25 MS: 1 ChangeBit- 00:07:26.699 [2024-05-15 05:32:16.696462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.699 [2024-05-15 05:32:16.696488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.699 [2024-05-15 05:32:16.696522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.699 [2024-05-15 05:32:16.696541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.699 [2024-05-15 05:32:16.696599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.699 [2024-05-15 05:32:16.696616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.959 #36 NEW cov: 12071 ft: 14890 corp: 27/334b lim: 25 exec/s: 36 rss: 73Mb L: 19/25 MS: 1 ShuffleBytes- 00:07:26.959 [2024-05-15 05:32:16.746672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.959 [2024-05-15 05:32:16.746699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.746757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.959 [2024-05-15 05:32:16.746771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.746825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.959 [2024-05-15 05:32:16.746840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.746898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.959 [2024-05-15 05:32:16.746915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.959 #37 NEW cov: 12071 ft: 14921 corp: 28/355b lim: 25 exec/s: 37 rss: 73Mb L: 21/25 MS: 1 
PersAutoDict- DE: "\377\377\377\000"- 00:07:26.959 [2024-05-15 05:32:16.796479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.959 [2024-05-15 05:32:16.796507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.959 #43 NEW cov: 12071 ft: 14935 corp: 29/363b lim: 25 exec/s: 43 rss: 73Mb L: 8/25 MS: 1 ChangeBit- 00:07:26.959 [2024-05-15 05:32:16.826553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.959 [2024-05-15 05:32:16.826580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.959 #44 NEW cov: 12071 ft: 14998 corp: 30/372b lim: 25 exec/s: 44 rss: 73Mb L: 9/25 MS: 1 InsertByte- 00:07:26.959 [2024-05-15 05:32:16.877074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.959 [2024-05-15 05:32:16.877102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.877155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.959 [2024-05-15 05:32:16.877171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.877226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.959 [2024-05-15 05:32:16.877243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.877299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:26.959 [2024-05-15 05:32:16.877315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.959 #45 NEW cov: 12071 ft: 15013 corp: 31/392b lim: 25 exec/s: 45 rss: 73Mb L: 20/25 MS: 1 CrossOver- 00:07:26.959 [2024-05-15 05:32:16.916820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.959 [2024-05-15 05:32:16.916851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.959 #46 NEW cov: 12071 ft: 15071 corp: 32/397b lim: 25 exec/s: 46 rss: 73Mb L: 5/25 MS: 1 ChangeBit- 00:07:26.959 [2024-05-15 05:32:16.957203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:26.959 [2024-05-15 05:32:16.957230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.957268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:26.959 [2024-05-15 05:32:16.957284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.959 [2024-05-15 05:32:16.957340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:26.959 [2024-05-15 05:32:16.957354] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.959 #47 NEW cov: 12071 ft: 15081 corp: 33/416b lim: 25 exec/s: 47 rss: 73Mb L: 19/25 MS: 1 ChangeByte- 00:07:27.218 [2024-05-15 05:32:16.997027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.218 [2024-05-15 05:32:16.997054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.218 #48 NEW cov: 12071 ft: 15096 corp: 34/425b lim: 25 exec/s: 48 rss: 73Mb L: 9/25 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:27.218 [2024-05-15 05:32:17.037176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.218 [2024-05-15 05:32:17.037204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.218 #50 NEW cov: 12071 ft: 15128 corp: 35/430b lim: 25 exec/s: 50 rss: 73Mb L: 5/25 MS: 2 EraseBytes-InsertByte- 00:07:27.219 [2024-05-15 05:32:17.087325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.219 [2024-05-15 05:32:17.087352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.219 #51 NEW cov: 12071 ft: 15163 corp: 36/439b lim: 25 exec/s: 51 rss: 73Mb L: 9/25 MS: 1 InsertByte- 00:07:27.219 [2024-05-15 05:32:17.137616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.219 [2024-05-15 05:32:17.137643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.219 [2024-05-15 05:32:17.137678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.219 [2024-05-15 05:32:17.137693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.219 #52 NEW cov: 12071 ft: 15207 corp: 37/453b lim: 25 exec/s: 52 rss: 73Mb L: 14/25 MS: 1 CopyPart- 00:07:27.219 [2024-05-15 05:32:17.187744] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.219 [2024-05-15 05:32:17.187771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.219 [2024-05-15 05:32:17.187806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:27.219 [2024-05-15 05:32:17.187820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.219 #53 NEW cov: 12071 ft: 15232 corp: 38/466b lim: 25 exec/s: 53 rss: 73Mb L: 13/25 MS: 1 PersAutoDict- DE: "\377\377\377\000"- 00:07:27.219 [2024-05-15 05:32:17.237913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:27.219 [2024-05-15 05:32:17.237940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.219 [2024-05-15 05:32:17.237993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:1 nsid:0 00:07:27.219 [2024-05-15 05:32:17.238010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.478 #54 NEW cov: 12071 ft: 15239 corp: 39/476b lim: 25 exec/s: 27 rss: 73Mb L: 10/25 MS: 1 EraseBytes- 00:07:27.478 #54 DONE cov: 12071 ft: 15239 corp: 39/476b lim: 25 exec/s: 27 rss: 73Mb 00:07:27.478 ###### Recommended dictionary. ###### 00:07:27.478 "\377\377\377\000" # Uses: 6 00:07:27.478 ###### End of recommended dictionary. ###### 00:07:27.478 Done 54 runs in 2 second(s) 00:07:27.478 [2024-05-15 05:32:17.260393] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:27.478 05:32:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:27.478 [2024-05-15 05:32:17.428869] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:27.478 [2024-05-15 05:32:17.428947] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276838 ] 00:07:27.478 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.738 [2024-05-15 05:32:17.609259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.738 [2024-05-15 05:32:17.676759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.738 [2024-05-15 05:32:17.735662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.738 [2024-05-15 05:32:17.751724] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:27.738 [2024-05-15 05:32:17.752145] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:27.997 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.997 INFO: Seed: 1614722533 00:07:27.997 INFO: Loaded 1 modules (352928 inline 8-bit counters): 352928 [0x291eb4c, 0x2974dec), 00:07:27.997 INFO: Loaded 1 PC tables (352928 PCs): 352928 [0x2974df0,0x2ed77f0), 00:07:27.997 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:27.997 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.997 #2 INITED exec/s: 0 rss: 64Mb 00:07:27.997 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.997 This may also happen if the target rejected all inputs we tried so far 00:07:27.997 [2024-05-15 05:32:17.810700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:27.997 [2024-05-15 05:32:17.810732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.255 NEW_FUNC[1/687]: 0x4ad7c0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:28.255 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.255 #4 NEW cov: 11898 ft: 11899 corp: 2/40b lim: 100 exec/s: 0 rss: 70Mb L: 39/39 MS: 2 CMP-InsertRepeatedBytes- DE: "\001\000\000\000\000\000\000\000"- 00:07:28.255 [2024-05-15 05:32:18.122027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.255 [2024-05-15 05:32:18.122114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.255 [2024-05-15 05:32:18.122236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.255 [2024-05-15 05:32:18.122282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.255 #9 NEW cov: 12029 ft: 13513 corp: 3/85b lim: 100 exec/s: 0 rss: 70Mb L: 45/45 MS: 5 ChangeByte-CrossOver-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:07:28.255 [2024-05-15 05:32:18.181574] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.255 [2024-05-15 05:32:18.181604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.255 #10 NEW cov: 12035 ft: 13807 corp: 4/109b lim: 100 exec/s: 0 rss: 71Mb L: 24/45 MS: 1 EraseBytes- 00:07:28.255 [2024-05-15 05:32:18.231679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.255 [2024-05-15 05:32:18.231707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.256 #14 NEW cov: 12120 ft: 14048 corp: 5/133b lim: 100 exec/s: 0 rss: 71Mb L: 24/45 MS: 4 ChangeBit-InsertByte-CrossOver-InsertRepeatedBytes- 00:07:28.256 [2024-05-15 05:32:18.271846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.256 [2024-05-15 05:32:18.271874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.514 #15 NEW cov: 12120 ft: 14148 corp: 6/157b lim: 100 exec/s: 0 rss: 71Mb L: 24/45 MS: 1 ShuffleBytes- 00:07:28.514 [2024-05-15 05:32:18.321929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:553648128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.321956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.514 #16 NEW cov: 12120 ft: 14210 corp: 7/181b lim: 100 exec/s: 0 rss: 71Mb L: 24/45 MS: 1 ChangeBit- 00:07:28.514 [2024-05-15 05:32:18.362517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.362544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.362594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.362608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.362661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.362677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.362730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.362745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.514 #17 NEW cov: 12120 ft: 14660 corp: 8/271b lim: 100 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 CopyPart- 00:07:28.514 [2024-05-15 05:32:18.412171] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.412198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.514 #18 NEW cov: 12120 ft: 14683 corp: 9/295b lim: 100 exec/s: 0 rss: 71Mb L: 24/90 MS: 1 CopyPart- 00:07:28.514 [2024-05-15 05:32:18.452752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.452779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.452832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.452847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.452899] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.452913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.452965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.452980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.514 #19 NEW cov: 12120 ft: 14726 corp: 10/385b lim: 100 exec/s: 0 rss: 71Mb L: 90/90 MS: 1 ChangeBit- 00:07:28.514 [2024-05-15 05:32:18.502898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11140386614647888538 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.502925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.502974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.502989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.503044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.503060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.514 [2024-05-15 05:32:18.503114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.514 [2024-05-15 05:32:18.503128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.514 #20 NEW cov: 
12120 ft: 14769 corp: 11/473b lim: 100 exec/s: 0 rss: 71Mb L: 88/90 MS: 1 InsertRepeatedBytes- 00:07:28.773 [2024-05-15 05:32:18.542574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:553648128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.542602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.773 #21 NEW cov: 12120 ft: 14821 corp: 12/497b lim: 100 exec/s: 0 rss: 71Mb L: 24/90 MS: 1 ChangeBinInt- 00:07:28.773 [2024-05-15 05:32:18.592865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.592892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.773 [2024-05-15 05:32:18.592926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.592941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.773 #22 NEW cov: 12120 ft: 14859 corp: 13/542b lim: 100 exec/s: 0 rss: 72Mb L: 45/90 MS: 1 CrossOver- 00:07:28.773 [2024-05-15 05:32:18.642982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.643008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.773 [2024-05-15 05:32:18.643042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:19140298416324608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.643057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.773 #23 NEW cov: 12120 ft: 14911 corp: 14/587b lim: 100 exec/s: 0 rss: 72Mb L: 45/90 MS: 1 ChangeByte- 00:07:28.773 [2024-05-15 05:32:18.692996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.693024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.773 NEW_FUNC[1/1]: 0x1a1b710 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:28.773 #24 NEW cov: 12143 ft: 14948 corp: 15/611b lim: 100 exec/s: 0 rss: 72Mb L: 24/90 MS: 1 ShuffleBytes- 00:07:28.773 [2024-05-15 05:32:18.743595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:17241 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.743621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.773 [2024-05-15 05:32:18.743672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.743687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.773 [2024-05-15 05:32:18.743739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.743756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.773 [2024-05-15 05:32:18.743811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.743827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.773 #25 NEW cov: 12143 ft: 14971 corp: 16/701b lim: 100 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 CMP- DE: "CX\327\203\327\316\205\000"- 00:07:28.773 [2024-05-15 05:32:18.783272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:28.773 [2024-05-15 05:32:18.783298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.033 #26 NEW cov: 12143 ft: 15004 corp: 17/725b lim: 100 exec/s: 26 rss: 72Mb L: 24/90 MS: 1 ChangeByte- 00:07:29.033 [2024-05-15 05:32:18.823412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:52603866104987648 len:52870 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.033 [2024-05-15 05:32:18.823440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.033 #27 NEW cov: 12143 ft: 15063 corp: 18/764b lim: 100 exec/s: 27 rss: 72Mb L: 39/90 MS: 1 CMP- DE: "\272\342\360\217\327\316\205\000"- 00:07:29.033 [2024-05-15 05:32:18.863625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16843008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.033 [2024-05-15 05:32:18.863655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.033 [2024-05-15 05:32:18.863693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:10995116277760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.033 [2024-05-15 05:32:18.863709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.034 #28 NEW cov: 12143 ft: 15079 corp: 19/817b lim: 100 exec/s: 28 rss: 72Mb L: 53/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:29.034 [2024-05-15 05:32:18.903610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:9217 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.034 [2024-05-15 05:32:18.903638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.034 #29 NEW cov: 12143 ft: 15084 corp: 20/841b lim: 100 exec/s: 29 rss: 72Mb L: 24/90 MS: 1 ChangeByte- 00:07:29.034 [2024-05-15 05:32:18.953796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.034 [2024-05-15 05:32:18.953825] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.034 #30 NEW cov: 12143 ft: 15113 corp: 21/872b lim: 100 exec/s: 30 rss: 72Mb L: 31/90 MS: 1 EraseBytes- 00:07:29.034 [2024-05-15 05:32:19.004029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.034 [2024-05-15 05:32:19.004057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.034 [2024-05-15 05:32:19.004102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15552854578472495401 len:54999 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.034 [2024-05-15 05:32:19.004117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.034 #31 NEW cov: 12143 ft: 15133 corp: 22/917b lim: 100 exec/s: 31 rss: 72Mb L: 45/90 MS: 1 ChangeBinInt- 00:07:29.034 [2024-05-15 05:32:19.044101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2377900603251621888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.034 [2024-05-15 05:32:19.044129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.034 [2024-05-15 05:32:19.044163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.034 [2024-05-15 05:32:19.044178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.322 #32 NEW cov: 12143 ft: 15138 corp: 23/957b lim: 100 exec/s: 32 rss: 73Mb L: 40/90 MS: 1 CrossOver- 00:07:29.322 [2024-05-15 05:32:19.094126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:553648128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.094155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.322 #33 NEW cov: 12143 ft: 15166 corp: 24/981b lim: 100 exec/s: 33 rss: 73Mb L: 24/90 MS: 1 ChangeBinInt- 00:07:29.322 [2024-05-15 05:32:19.144495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:22151168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.144525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.322 [2024-05-15 05:32:19.144561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.144577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.322 #34 NEW cov: 12143 ft: 15278 corp: 25/1026b lim: 100 exec/s: 34 rss: 73Mb L: 45/90 MS: 1 ChangeByte- 00:07:29.322 [2024-05-15 05:32:19.184806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.184836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.322 [2024-05-15 05:32:19.184874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.184890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.322 [2024-05-15 05:32:19.184944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.184961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.322 [2024-05-15 05:32:19.185012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:55172 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.185027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.322 #35 NEW cov: 12143 ft: 15305 corp: 26/1124b lim: 100 exec/s: 35 rss: 73Mb L: 98/98 MS: 1 PersAutoDict- DE: "CX\327\203\327\316\205\000"- 00:07:29.322 [2024-05-15 05:32:19.234496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.234524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.322 #36 NEW cov: 12143 ft: 15323 corp: 27/1148b lim: 100 exec/s: 36 rss: 73Mb L: 24/98 MS: 1 CopyPart- 00:07:29.322 [2024-05-15 05:32:19.274710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.274741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.322 [2024-05-15 05:32:19.274783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947085673277737 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.274799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.322 #37 NEW cov: 12143 ft: 15326 corp: 28/1188b lim: 100 exec/s: 37 rss: 73Mb L: 40/98 MS: 1 CrossOver- 00:07:29.322 [2024-05-15 05:32:19.324905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:201880240128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.324933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.322 [2024-05-15 05:32:19.324964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.322 [2024-05-15 05:32:19.324978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 #38 NEW cov: 12143 ft: 15340 corp: 29/1228b lim: 100 exec/s: 38 rss: 73Mb L: 40/98 MS: 1 InsertByte- 00:07:29.594 [2024-05-15 05:32:19.365170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:0 nsid:0 lba:16777216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.365198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.365243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.365259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.365311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.365327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 #39 NEW cov: 12143 ft: 15620 corp: 30/1300b lim: 100 exec/s: 39 rss: 73Mb L: 72/98 MS: 1 CrossOver- 00:07:29.594 [2024-05-15 05:32:19.405103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:22151168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.405131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.405165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.405180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 #40 NEW cov: 12143 ft: 15641 corp: 31/1345b lim: 100 exec/s: 40 rss: 73Mb L: 45/98 MS: 1 ChangeBinInt- 00:07:29.594 [2024-05-15 05:32:19.455525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11140386614647888538 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.455552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.455603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.455618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.455676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:11140386617062234778 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.455692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.455746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.455761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.594 #41 NEW cov: 12143 ft: 15646 corp: 32/1434b lim: 100 exec/s: 41 rss: 73Mb L: 89/98 
MS: 1 InsertByte- 00:07:29.594 [2024-05-15 05:32:19.505660] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.505687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.505738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.505753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.505806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.594 [2024-05-15 05:32:19.505822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.594 [2024-05-15 05:32:19.505878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:22744 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.595 [2024-05-15 05:32:19.505893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.595 #42 NEW cov: 12143 ft: 15654 corp: 33/1533b lim: 100 exec/s: 42 rss: 74Mb L: 99/99 MS: 1 CopyPart- 00:07:29.595 [2024-05-15 05:32:19.555403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:553648128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.595 [2024-05-15 05:32:19.555432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.595 #43 NEW cov: 12143 ft: 15672 corp: 34/1557b lim: 100 exec/s: 43 rss: 74Mb L: 24/99 MS: 1 PersAutoDict- DE: "\272\342\360\217\327\316\205\000"- 00:07:29.595 [2024-05-15 05:32:19.595918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:50307712164036608 len:52870 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.595 [2024-05-15 05:32:19.595945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.595 [2024-05-15 05:32:19.595995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.595 [2024-05-15 05:32:19.596008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.595 [2024-05-15 05:32:19.596063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.595 [2024-05-15 05:32:19.596079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.595 [2024-05-15 05:32:19.596134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.595 [2024-05-15 05:32:19.596148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:29.854 #44 NEW cov: 12143 ft: 15686 corp: 35/1637b lim: 100 exec/s: 44 rss: 74Mb L: 80/99 MS: 1 CMP- DE: "\262\272\231\002\330\316\205\000"- 00:07:29.854 [2024-05-15 05:32:19.645644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:274878460592128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.854 [2024-05-15 05:32:19.645671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.854 #45 NEW cov: 12143 ft: 15691 corp: 36/1661b lim: 100 exec/s: 45 rss: 74Mb L: 24/99 MS: 1 ChangeBinInt- 00:07:29.854 [2024-05-15 05:32:19.695936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2965947086193371433 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.854 [2024-05-15 05:32:19.695963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.854 [2024-05-15 05:32:19.695997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.854 [2024-05-15 05:32:19.696012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.854 #46 NEW cov: 12143 ft: 15699 corp: 37/1707b lim: 100 exec/s: 46 rss: 74Mb L: 46/99 MS: 1 EraseBytes- 00:07:29.854 [2024-05-15 05:32:19.735984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.854 [2024-05-15 05:32:19.736011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.854 #47 NEW cov: 12143 ft: 15713 corp: 38/1738b lim: 100 exec/s: 47 rss: 74Mb L: 31/99 MS: 1 ChangeBit- 00:07:29.854 [2024-05-15 05:32:19.786202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:16842752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.854 [2024-05-15 05:32:19.786229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.854 [2024-05-15 05:32:19.786263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:19140298416389119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:29.854 [2024-05-15 05:32:19.786278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.854 #48 NEW cov: 12143 ft: 15724 corp: 39/1783b lim: 100 exec/s: 24 rss: 74Mb L: 45/99 MS: 1 ChangeBinInt- 00:07:29.854 #48 DONE cov: 12143 ft: 15724 corp: 39/1783b lim: 100 exec/s: 24 rss: 74Mb 00:07:29.854 ###### Recommended dictionary. ###### 00:07:29.854 "\001\000\000\000\000\000\000\000" # Uses: 1 00:07:29.854 "CX\327\203\327\316\205\000" # Uses: 1 00:07:29.854 "\272\342\360\217\327\316\205\000" # Uses: 1 00:07:29.854 "\262\272\231\002\330\316\205\000" # Uses: 0 00:07:29.854 ###### End of recommended dictionary. 
###### 00:07:29.854 Done 48 runs in 2 second(s) 00:07:29.854 [2024-05-15 05:32:19.810767] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:30.114 05:32:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:30.114 05:32:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:30.114 05:32:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.114 05:32:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:30.114 00:07:30.114 real 1m4.021s 00:07:30.114 user 1m40.282s 00:07:30.114 sys 0m7.042s 00:07:30.114 05:32:19 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:30.114 05:32:19 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:30.114 ************************************ 00:07:30.114 END TEST nvmf_fuzz 00:07:30.114 ************************************ 00:07:30.114 05:32:19 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:30.114 05:32:19 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:30.114 05:32:19 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:30.114 05:32:19 llvm_fuzz -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:30.114 05:32:19 llvm_fuzz -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:30.114 05:32:19 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:30.114 ************************************ 00:07:30.114 START TEST vfio_fuzz 00:07:30.114 ************************************ 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:30.114 * Looking for test storage... 
00:07:30.114 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:30.114 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:30.115 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:30.377 05:32:20 
llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:30.377 #define SPDK_CONFIG_H 00:07:30.377 #define SPDK_CONFIG_APPS 1 00:07:30.377 #define SPDK_CONFIG_ARCH native 00:07:30.377 #undef SPDK_CONFIG_ASAN 00:07:30.377 #undef SPDK_CONFIG_AVAHI 00:07:30.377 #undef SPDK_CONFIG_CET 00:07:30.377 #define SPDK_CONFIG_COVERAGE 1 00:07:30.377 #define SPDK_CONFIG_CROSS_PREFIX 00:07:30.377 #undef SPDK_CONFIG_CRYPTO 00:07:30.377 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:30.377 #undef SPDK_CONFIG_CUSTOMOCF 00:07:30.377 #undef SPDK_CONFIG_DAOS 00:07:30.377 #define SPDK_CONFIG_DAOS_DIR 00:07:30.377 #define SPDK_CONFIG_DEBUG 1 00:07:30.377 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:30.377 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:30.377 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:30.377 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:30.377 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:30.377 #undef SPDK_CONFIG_DPDK_UADK 00:07:30.377 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:30.377 #define SPDK_CONFIG_EXAMPLES 1 00:07:30.377 #undef SPDK_CONFIG_FC 00:07:30.377 #define SPDK_CONFIG_FC_PATH 00:07:30.377 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:30.377 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:30.377 #undef SPDK_CONFIG_FUSE 00:07:30.377 #define SPDK_CONFIG_FUZZER 1 00:07:30.377 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:30.377 #undef SPDK_CONFIG_GOLANG 00:07:30.377 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:30.377 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:30.377 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:30.377 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:30.377 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:30.377 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:30.377 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:30.377 #define SPDK_CONFIG_IDXD 1 00:07:30.377 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:30.377 #undef SPDK_CONFIG_IPSEC_MB 00:07:30.377 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:30.377 #define SPDK_CONFIG_ISAL 1 00:07:30.377 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:30.377 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:30.377 #define SPDK_CONFIG_LIBDIR 00:07:30.377 #undef SPDK_CONFIG_LTO 00:07:30.377 #define SPDK_CONFIG_MAX_LCORES 00:07:30.377 #define SPDK_CONFIG_NVME_CUSE 1 00:07:30.377 #undef SPDK_CONFIG_OCF 00:07:30.377 #define SPDK_CONFIG_OCF_PATH 00:07:30.377 #define SPDK_CONFIG_OPENSSL_PATH 00:07:30.377 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:30.377 #define SPDK_CONFIG_PGO_DIR 00:07:30.377 #undef SPDK_CONFIG_PGO_USE 00:07:30.377 #define SPDK_CONFIG_PREFIX /usr/local 00:07:30.377 #undef SPDK_CONFIG_RAID5F 00:07:30.377 #undef 
SPDK_CONFIG_RBD 00:07:30.377 #define SPDK_CONFIG_RDMA 1 00:07:30.377 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:30.377 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:30.377 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:30.377 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:30.377 #undef SPDK_CONFIG_SHARED 00:07:30.377 #undef SPDK_CONFIG_SMA 00:07:30.377 #define SPDK_CONFIG_TESTS 1 00:07:30.377 #undef SPDK_CONFIG_TSAN 00:07:30.377 #define SPDK_CONFIG_UBLK 1 00:07:30.377 #define SPDK_CONFIG_UBSAN 1 00:07:30.377 #undef SPDK_CONFIG_UNIT_TESTS 00:07:30.377 #undef SPDK_CONFIG_URING 00:07:30.377 #define SPDK_CONFIG_URING_PATH 00:07:30.377 #undef SPDK_CONFIG_URING_ZNS 00:07:30.377 #undef SPDK_CONFIG_USDT 00:07:30.377 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:30.377 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:30.377 #define SPDK_CONFIG_VFIO_USER 1 00:07:30.377 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:30.377 #define SPDK_CONFIG_VHOST 1 00:07:30.377 #define SPDK_CONFIG_VIRTIO 1 00:07:30.377 #undef SPDK_CONFIG_VTUNE 00:07:30.377 #define SPDK_CONFIG_VTUNE_DIR 00:07:30.377 #define SPDK_CONFIG_WERROR 1 00:07:30.377 #define SPDK_CONFIG_WPDK_DIR 00:07:30.377 #undef SPDK_CONFIG_XNVME 00:07:30.377 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.377 05:32:20 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:07:30.378 05:32:20 
llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # : 1 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # : 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:30.378 
05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # : 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # : 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:30.378 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # : 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # : 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
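The entries above trace common/autotest_common.sh giving every SPDK_TEST_*/SPDK_RUN_* flag a default and then exporting it. A minimal bash sketch of that default-then-export idiom, shown for a hypothetical subset of the flags (the real script walks every flag listed in the trace, and autorun-spdk.conf had already set SPDK_TEST_FUZZER, SPDK_TEST_FUZZER_SHORT and SPDK_RUN_UBSAN to 1 for this job):

    # Keep any value the CI config already exported; otherwise fall back to the default,
    # then export so child scripts and the fuzzer harness see the same setting.
    : "${SPDK_TEST_FUZZER:=0}"
    export SPDK_TEST_FUZZER
    : "${SPDK_TEST_FUZZER_SHORT:=0}"
    export SPDK_TEST_FUZZER_SHORT
    : "${SPDK_RUN_UBSAN:=0}"
    export SPDK_RUN_UBSAN
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
    export SPDK_TEST_NVMF_TRANSPORT

In the xtrace output this expands to the bare "-- # : 1" / "-- # export SPDK_TEST_FUZZER" pairs seen above.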
00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3277395 ]] 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # kill -0 3277395 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:30.379 
05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.wjusxV 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.wjusxV/tests/vfio /tmp/spdk.wjusxV 00:07:30.379 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=968232960 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4316196864 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=52485550080 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=9256755200 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866440192 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342489088 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5971968 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30869630976 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=1523712 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:30.380 * Looking for test storage... 
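The storage probe continues below. What set_test_storage, as traced here, appears to do is index the `df -T` output by mount point and then pick the first candidate directory whose filesystem still has the requested headroom. A minimal sketch under that assumption (units simplified to df's default 1K blocks; the real function also special-cases tmpfs/ramfs and grows the size check when the corpus shares the filesystem):

    # Index free space by mount point so each candidate can be checked cheaply.
    declare -A fss avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    requested_size=$((2 * 1024 * 1024))   # ~2 GiB expressed in 1K blocks (illustrative)
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        if (( ${avails[$mount]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done

Here the overlay root has ~52 GB available, so the test directory itself is accepted and reported as the test storage.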
00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # target_space=52485550080 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # new_size=11471347712 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:30.380 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # set -o errtrace 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1684 -- # true 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1686 -- # xtrace_fd 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:30.380 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:07:30.380 05:32:20 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:30.380 [2024-05-15 05:32:20.286799] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:30.380 [2024-05-15 05:32:20.286871] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277445 ] 00:07:30.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.380 [2024-05-15 05:32:20.360334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.640 [2024-05-15 05:32:20.441508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.640 [2024-05-15 05:32:20.611538] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:30.640 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.640 INFO: Seed: 179760777 00:07:30.640 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:30.640 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:30.640 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:30.640 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.640 #2 INITED exec/s: 0 rss: 64Mb 00:07:30.640 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:30.640 This may also happen if the target rejected all inputs we tried so far 00:07:30.899 [2024-05-15 05:32:20.680393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:31.159 NEW_FUNC[1/646]: 0x481740 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:31.159 NEW_FUNC[2/646]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:31.159 #29 NEW cov: 10921 ft: 10886 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:31.425 #30 NEW cov: 10935 ft: 13772 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 CopyPart- 00:07:31.684 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:31.684 #36 NEW cov: 10952 ft: 14922 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:31.684 #37 NEW cov: 10952 ft: 15394 corp: 5/25b lim: 6 exec/s: 37 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:07:31.943 #38 NEW cov: 10952 ft: 15530 corp: 6/31b lim: 6 exec/s: 38 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:07:32.202 #39 NEW cov: 10952 ft: 15869 corp: 7/37b lim: 6 exec/s: 39 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:32.462 #40 NEW cov: 10952 ft: 15909 corp: 8/43b lim: 6 exec/s: 40 rss: 73Mb L: 6/6 MS: 1 ChangeBit- 00:07:32.462 #41 NEW cov: 10952 ft: 16012 corp: 9/49b lim: 6 exec/s: 41 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:32.721 #42 NEW cov: 10959 ft: 16087 corp: 10/55b lim: 6 exec/s: 42 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:32.980 #44 NEW cov: 10959 ft: 17048 corp: 11/61b lim: 6 exec/s: 22 rss: 73Mb L: 6/6 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:32.980 #44 DONE cov: 10959 ft: 17048 corp: 11/61b lim: 6 exec/s: 22 rss: 73Mb 00:07:32.980 Done 44 runs in 2 second(s) 00:07:32.980 [2024-05-15 05:32:22.802569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:32.980 [2024-05-15 05:32:22.851718] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:33.249 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local 
suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:33.250 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:33.250 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:33.251 05:32:23 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:33.251 [2024-05-15 05:32:23.085161] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:33.251 [2024-05-15 05:32:23.085248] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277978 ] 00:07:33.251 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.251 [2024-05-15 05:32:23.156680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.251 [2024-05-15 05:32:23.227336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.516 [2024-05-15 05:32:23.391105] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:33.516 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.516 INFO: Seed: 2959752519 00:07:33.516 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:33.516 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:33.516 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:33.516 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.516 #2 INITED exec/s: 0 rss: 64Mb 00:07:33.516 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:33.516 This may also happen if the target rejected all inputs we tried so far 00:07:33.516 [2024-05-15 05:32:23.459596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:33.516 [2024-05-15 05:32:23.511463] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:33.516 [2024-05-15 05:32:23.511487] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:33.516 [2024-05-15 05:32:23.511505] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.036 NEW_FUNC[1/648]: 0x481ce0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:34.036 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:34.036 #27 NEW cov: 10914 ft: 10889 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 5 ChangeBinInt-ChangeByte-ChangeByte-CMP-CrossOver- DE: "\010\000"- 00:07:34.036 [2024-05-15 05:32:23.986367] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.036 [2024-05-15 05:32:23.986409] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.036 [2024-05-15 05:32:23.986426] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.295 #28 NEW cov: 10931 ft: 13346 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:07:34.295 [2024-05-15 05:32:24.176423] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.295 [2024-05-15 05:32:24.176446] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.295 [2024-05-15 05:32:24.176463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.295 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:34.295 #29 NEW cov: 10948 ft: 14499 corp: 4/13b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:07:34.555 [2024-05-15 05:32:24.366522] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.555 [2024-05-15 05:32:24.366545] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.555 [2024-05-15 05:32:24.366561] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.555 #30 NEW cov: 10948 ft: 14846 corp: 5/17b lim: 4 exec/s: 30 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:07:34.555 [2024-05-15 05:32:24.557165] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.555 [2024-05-15 05:32:24.557187] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.555 [2024-05-15 05:32:24.557205] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:34.814 #31 NEW cov: 10948 ft: 15175 corp: 6/21b lim: 4 exec/s: 31 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:34.814 [2024-05-15 05:32:24.748939] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:34.814 [2024-05-15 05:32:24.748961] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:34.814 [2024-05-15 05:32:24.748978] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:35.074 #34 NEW cov: 
10948 ft: 15264 corp: 7/25b lim: 4 exec/s: 34 rss: 73Mb L: 4/4 MS: 3 ChangeByte-PersAutoDict-InsertByte- DE: "\010\000"- 00:07:35.074 [2024-05-15 05:32:24.945933] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:35.074 [2024-05-15 05:32:24.945959] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:35.074 [2024-05-15 05:32:24.945975] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:35.074 #35 NEW cov: 10948 ft: 15406 corp: 8/29b lim: 4 exec/s: 35 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:35.333 [2024-05-15 05:32:25.133728] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:35.333 [2024-05-15 05:32:25.133750] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:35.333 [2024-05-15 05:32:25.133766] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:35.333 #41 NEW cov: 10955 ft: 15517 corp: 9/33b lim: 4 exec/s: 41 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:07:35.333 [2024-05-15 05:32:25.326325] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:35.333 [2024-05-15 05:32:25.326346] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:35.333 [2024-05-15 05:32:25.326363] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:35.593 #42 NEW cov: 10955 ft: 15586 corp: 10/37b lim: 4 exec/s: 21 rss: 74Mb L: 4/4 MS: 1 PersAutoDict- DE: "\010\000"- 00:07:35.593 #42 DONE cov: 10955 ft: 15586 corp: 10/37b lim: 4 exec/s: 21 rss: 74Mb 00:07:35.593 ###### Recommended dictionary. ###### 00:07:35.593 "\010\000" # Uses: 2 00:07:35.593 ###### End of recommended dictionary. 
###### 00:07:35.593 Done 42 runs in 2 second(s) 00:07:35.593 [2024-05-15 05:32:25.455571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:35.593 [2024-05-15 05:32:25.500762] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:35.853 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:35.854 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:35.854 05:32:25 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:35.854 [2024-05-15 05:32:25.733197] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:35.854 [2024-05-15 05:32:25.733287] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278516 ] 00:07:35.854 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.854 [2024-05-15 05:32:25.806399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.113 [2024-05-15 05:32:25.878543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.113 [2024-05-15 05:32:26.042993] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:36.113 INFO: Running with entropic power schedule (0xFF, 100). 00:07:36.113 INFO: Seed: 1315792005 00:07:36.113 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:36.113 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:36.113 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:36.113 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.113 #2 INITED exec/s: 0 rss: 63Mb 00:07:36.113 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.113 This may also happen if the target rejected all inputs we tried so far 00:07:36.113 [2024-05-15 05:32:26.115371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:36.113 [2024-05-15 05:32:26.131509] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.631 NEW_FUNC[1/647]: 0x4826c0 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:36.631 NEW_FUNC[2/647]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:36.631 #8 NEW cov: 10896 ft: 10841 corp: 2/9b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:07:36.631 [2024-05-15 05:32:26.555125] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.631 #14 NEW cov: 10910 ft: 13587 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 CrossOver- 00:07:36.890 [2024-05-15 05:32:26.667322] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.890 #15 NEW cov: 10910 ft: 14949 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:36.890 [2024-05-15 05:32:26.791192] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:36.890 #16 NEW cov: 10910 ft: 15231 corp: 5/33b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:36.890 [2024-05-15 05:32:26.902843] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.149 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:37.149 #17 NEW cov: 10927 ft: 15954 corp: 6/41b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:37.149 [2024-05-15 05:32:27.014976] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.149 #18 NEW cov: 10927 ft: 16386 corp: 7/49b lim: 8 exec/s: 18 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:37.149 [2024-05-15 05:32:27.135906] 
vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.408 #21 NEW cov: 10927 ft: 16566 corp: 8/57b lim: 8 exec/s: 21 rss: 73Mb L: 8/8 MS: 3 CrossOver-ChangeByte-CrossOver- 00:07:37.408 [2024-05-15 05:32:27.248608] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.408 #22 NEW cov: 10927 ft: 16774 corp: 9/65b lim: 8 exec/s: 22 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:07:37.408 [2024-05-15 05:32:27.359781] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.667 #23 NEW cov: 10927 ft: 16985 corp: 10/73b lim: 8 exec/s: 23 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:37.667 [2024-05-15 05:32:27.481450] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.667 #24 NEW cov: 10927 ft: 17129 corp: 11/81b lim: 8 exec/s: 24 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:07:37.667 [2024-05-15 05:32:27.593244] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.667 #25 NEW cov: 10927 ft: 17306 corp: 12/89b lim: 8 exec/s: 25 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:37.925 [2024-05-15 05:32:27.705320] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.925 #26 NEW cov: 10927 ft: 17354 corp: 13/97b lim: 8 exec/s: 26 rss: 73Mb L: 8/8 MS: 1 CopyPart- 00:07:37.925 [2024-05-15 05:32:27.817217] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:37.926 #27 NEW cov: 10934 ft: 17709 corp: 14/105b lim: 8 exec/s: 27 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:37.926 [2024-05-15 05:32:27.928055] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:38.185 #28 NEW cov: 10934 ft: 17908 corp: 15/113b lim: 8 exec/s: 28 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:07:38.185 [2024-05-15 05:32:28.040903] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:38.185 #29 NEW cov: 10934 ft: 17970 corp: 16/121b lim: 8 exec/s: 14 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:38.185 #29 DONE cov: 10934 ft: 17970 corp: 16/121b lim: 8 exec/s: 14 rss: 74Mb 00:07:38.185 Done 29 runs in 2 second(s) 00:07:38.185 [2024-05-15 05:32:28.131557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:38.185 [2024-05-15 05:32:28.180951] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:38.445 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:38.445 05:32:28 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:38.445 [2024-05-15 05:32:28.413780] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:38.445 [2024-05-15 05:32:28.413853] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278901 ] 00:07:38.445 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.704 [2024-05-15 05:32:28.486789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.704 [2024-05-15 05:32:28.561522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.964 [2024-05-15 05:32:28.731996] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:38.964 INFO: Running with entropic power schedule (0xFF, 100). 00:07:38.964 INFO: Seed: 4004782046 00:07:38.964 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:38.964 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:38.964 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:38.964 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.964 #2 INITED exec/s: 0 rss: 63Mb 00:07:38.964 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:38.964 This may also happen if the target rejected all inputs we tried so far 00:07:38.964 [2024-05-15 05:32:28.800434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:38.964 [2024-05-15 05:32:28.853411] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 223200860438528 > max 8796093022208 00:07:38.964 [2024-05-15 05:32:28.853451] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xcb0000000000) offset=0 flags=0x3: No space left on device 00:07:38.964 [2024-05-15 05:32:28.853463] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:38.964 [2024-05-15 05:32:28.853488] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:39.532 NEW_FUNC[1/648]: 0x482da0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:39.532 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:39.532 #86 NEW cov: 10912 ft: 10857 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 4 ShuffleBytes-InsertRepeatedBytes-ChangeBinInt-InsertByte- 00:07:39.532 #110 NEW cov: 10933 ft: 13283 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 CrossOver-ChangeBit-InsertByte-InsertRepeatedBytes- 00:07:39.532 [2024-05-15 05:32:29.515320] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:07:39.532 [2024-05-15 05:32:29.515351] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:07:39.532 [2024-05-15 05:32:29.515362] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:39.532 [2024-05-15 05:32:29.515381] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:39.791 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:39.791 #111 NEW cov: 10950 ft: 14722 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:39.791 #112 NEW cov: 10950 ft: 16031 corp: 5/129b lim: 32 exec/s: 112 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:40.050 [2024-05-15 05:32:29.865303] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 223200860438528 > max 8796093022208 00:07:40.050 [2024-05-15 05:32:29.865328] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x164200000000, 0xe14200000000) offset=0 flags=0x3: No space left on device 00:07:40.050 [2024-05-15 05:32:29.865340] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:40.050 [2024-05-15 05:32:29.865355] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:40.050 #113 NEW cov: 10950 ft: 16203 corp: 6/161b lim: 32 exec/s: 113 rss: 73Mb L: 32/32 MS: 1 CMP- DE: "B\026\000\000\000\000\000\000"- 00:07:40.050 [2024-05-15 05:32:30.041582] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 223200860438528 > max 8796093022208 00:07:40.050 [2024-05-15 05:32:30.041610] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xcb0000000000) 
offset=0 flags=0x3: No space left on device 00:07:40.050 [2024-05-15 05:32:30.041623] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:40.050 [2024-05-15 05:32:30.041643] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:40.308 #124 NEW cov: 10950 ft: 16706 corp: 7/193b lim: 32 exec/s: 124 rss: 73Mb L: 32/32 MS: 1 CrossOver- 00:07:40.309 #135 NEW cov: 10950 ft: 17023 corp: 8/225b lim: 32 exec/s: 135 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:40.567 [2024-05-15 05:32:30.403175] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 223200860438528 > max 8796093022208 00:07:40.567 [2024-05-15 05:32:30.403204] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xcb0000000000) offset=0 flags=0x3: No space left on device 00:07:40.567 [2024-05-15 05:32:30.403215] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:40.567 [2024-05-15 05:32:30.403233] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:40.567 #136 NEW cov: 10950 ft: 17257 corp: 9/257b lim: 32 exec/s: 136 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:40.567 [2024-05-15 05:32:30.577378] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 223200860438528 > max 8796093022208 00:07:40.567 [2024-05-15 05:32:30.577406] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xcb0000000000) offset=0 flags=0x3: No space left on device 00:07:40.567 [2024-05-15 05:32:30.577418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:40.567 [2024-05-15 05:32:30.577434] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:40.826 #137 NEW cov: 10957 ft: 17624 corp: 10/289b lim: 32 exec/s: 137 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:40.826 [2024-05-15 05:32:30.750866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 223200860438528 > max 8796093022208 00:07:40.826 [2024-05-15 05:32:30.750889] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0, 0xcb0000000000) offset=0 flags=0x3: No space left on device 00:07:40.826 [2024-05-15 05:32:30.750900] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:07:40.826 [2024-05-15 05:32:30.750916] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:41.085 #138 NEW cov: 10957 ft: 17730 corp: 11/321b lim: 32 exec/s: 69 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:41.085 #138 DONE cov: 10957 ft: 17730 corp: 11/321b lim: 32 exec/s: 69 rss: 73Mb 00:07:41.085 ###### Recommended dictionary. ###### 00:07:41.085 "B\026\000\000\000\000\000\000" # Uses: 1 00:07:41.085 ###### End of recommended dictionary. 
###### 00:07:41.085 Done 138 runs in 2 second(s) 00:07:41.085 [2024-05-15 05:32:30.874568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:41.085 [2024-05-15 05:32:30.928320] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:41.344 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:41.344 05:32:31 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:41.344 [2024-05-15 05:32:31.166242] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:41.344 [2024-05-15 05:32:31.166314] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279344 ] 00:07:41.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.344 [2024-05-15 05:32:31.238881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.344 [2024-05-15 05:32:31.310404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.603 [2024-05-15 05:32:31.477009] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:41.603 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.603 INFO: Seed: 2454830567 00:07:41.603 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:41.603 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:41.603 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:41.603 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.603 #2 INITED exec/s: 0 rss: 63Mb 00:07:41.604 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.604 This may also happen if the target rejected all inputs we tried so far 00:07:41.604 [2024-05-15 05:32:31.544495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:41.604 [2024-05-15 05:32:31.598428] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=323 offset=0 prot=0x3: Invalid argument 00:07:41.604 [2024-05-15 05:32:31.598467] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:07:41.604 [2024-05-15 05:32:31.598477] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:41.604 [2024-05-15 05:32:31.598503] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:41.604 [2024-05-15 05:32:31.599422] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:07:41.604 [2024-05-15 05:32:31.599438] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:41.604 [2024-05-15 05:32:31.599453] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:42.121 NEW_FUNC[1/648]: 0x483620 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:42.121 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:42.121 #330 NEW cov: 10913 ft: 10882 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 3 ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:42.121 [2024-05-15 05:32:32.080076] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:42.121 [2024-05-15 05:32:32.080113] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x2b flags=0x3: Invalid argument 
00:07:42.121 [2024-05-15 05:32:32.080124] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:42.121 [2024-05-15 05:32:32.080140] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:42.121 [2024-05-15 05:32:32.081076] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:07:42.121 [2024-05-15 05:32:32.081094] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:42.121 [2024-05-15 05:32:32.081111] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:42.379 #331 NEW cov: 10933 ft: 14472 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:07:42.379 [2024-05-15 05:32:32.270842] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x200000000, 0x200000000) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:42.379 [2024-05-15 05:32:32.270866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000000, 0x200000000) offset=0x2b flags=0x3: Invalid argument 00:07:42.379 [2024-05-15 05:32:32.270877] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:42.379 [2024-05-15 05:32:32.270893] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:42.379 [2024-05-15 05:32:32.271848] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000000, 0x200000000) flags=0: No such file or directory 00:07:42.379 [2024-05-15 05:32:32.271868] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:42.379 [2024-05-15 05:32:32.271885] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:42.379 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:42.379 #332 NEW cov: 10950 ft: 15475 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:42.638 [2024-05-15 05:32:32.449282] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x200000100, 0x200000100) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:42.638 [2024-05-15 05:32:32.449307] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000100, 0x200000100) offset=0x2b flags=0x3: Invalid argument 00:07:42.638 [2024-05-15 05:32:32.449318] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:42.638 [2024-05-15 05:32:32.449335] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:42.638 [2024-05-15 05:32:32.450293] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000100, 0x200000100) flags=0: No such file or directory 00:07:42.638 [2024-05-15 05:32:32.450313] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:42.638 [2024-05-15 05:32:32.450330] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:42.638 #333 NEW cov: 10950 ft: 15763 corp: 5/129b lim: 32 exec/s: 333 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:42.638 [2024-05-15 05:32:32.628470] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: 
DMA region size 35184372088832 > max 8796093022208 00:07:42.638 [2024-05-15 05:32:32.628493] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000000, 0x200200000000) offset=0x2b flags=0x3: No space left on device 00:07:42.638 [2024-05-15 05:32:32.628504] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:07:42.638 [2024-05-15 05:32:32.628521] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:42.638 [2024-05-15 05:32:32.629506] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000000, 0x200200000000) flags=0: No such file or directory 00:07:42.638 [2024-05-15 05:32:32.629526] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:42.638 [2024-05-15 05:32:32.629542] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:42.897 #334 NEW cov: 10950 ft: 15864 corp: 6/161b lim: 32 exec/s: 334 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:42.897 [2024-05-15 05:32:32.809143] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x200000100, 0x200000100) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:42.897 [2024-05-15 05:32:32.809167] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000100, 0x200000100) offset=0x2b flags=0x3: Invalid argument 00:07:42.897 [2024-05-15 05:32:32.809178] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:42.897 [2024-05-15 05:32:32.809195] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:42.897 [2024-05-15 05:32:32.810164] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000100, 0x200000100) flags=0: No such file or directory 00:07:42.897 [2024-05-15 05:32:32.810185] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:42.897 [2024-05-15 05:32:32.810201] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:43.155 #340 NEW cov: 10950 ft: 16727 corp: 7/193b lim: 32 exec/s: 340 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:43.155 [2024-05-15 05:32:32.988582] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x200000000, 0x200000000) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:43.155 [2024-05-15 05:32:32.988606] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000000, 0x200000000) offset=0x2b flags=0x3: Invalid argument 00:07:43.155 [2024-05-15 05:32:32.988616] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:43.155 [2024-05-15 05:32:32.988633] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:43.155 [2024-05-15 05:32:32.989584] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000000, 0x200000000) flags=0: No such file or directory 00:07:43.155 [2024-05-15 05:32:32.989606] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:43.155 [2024-05-15 05:32:32.989622] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:43.155 #341 NEW cov: 10950 ft: 17123 corp: 
8/225b lim: 32 exec/s: 341 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:43.155 [2024-05-15 05:32:33.167915] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x200000000, 0x200000000) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:43.155 [2024-05-15 05:32:33.167937] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000000, 0x200000000) offset=0x2b flags=0x3: Invalid argument 00:07:43.155 [2024-05-15 05:32:33.167948] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:43.155 [2024-05-15 05:32:33.167969] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:43.155 [2024-05-15 05:32:33.168948] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000000, 0x200000000) flags=0: No such file or directory 00:07:43.156 [2024-05-15 05:32:33.168966] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:43.156 [2024-05-15 05:32:33.168981] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:43.414 #342 NEW cov: 10950 ft: 17267 corp: 9/257b lim: 32 exec/s: 342 rss: 74Mb L: 32/32 MS: 1 CopyPart- 00:07:43.414 [2024-05-15 05:32:33.345164] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 35184372088832 > max 8796093022208 00:07:43.414 [2024-05-15 05:32:33.345187] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000000, 0x200200000000) offset=0x2b flags=0x3: No space left on device 00:07:43.414 [2024-05-15 05:32:33.345197] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:07:43.414 [2024-05-15 05:32:33.345213] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:43.414 [2024-05-15 05:32:33.346170] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000000, 0x200200000000) flags=0: No such file or directory 00:07:43.414 [2024-05-15 05:32:33.346189] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:07:43.414 [2024-05-15 05:32:33.346205] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:43.672 #343 NEW cov: 10957 ft: 17470 corp: 10/289b lim: 32 exec/s: 343 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:07:43.673 [2024-05-15 05:32:33.524876] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [0x200000000, 0x200000000) fd=325 offset=0x2b prot=0x3: Invalid argument 00:07:43.673 [2024-05-15 05:32:33.524898] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x200000000, 0x200000000) offset=0x2b flags=0x3: Invalid argument 00:07:43.673 [2024-05-15 05:32:33.524909] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:07:43.673 [2024-05-15 05:32:33.524926] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:07:43.673 [2024-05-15 05:32:33.525867] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x200000000, 0x200000000) flags=0: No such file or directory 00:07:43.673 [2024-05-15 05:32:33.525886] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file 
or directory 00:07:43.673 [2024-05-15 05:32:33.525903] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:07:43.673 #344 NEW cov: 10957 ft: 17593 corp: 11/321b lim: 32 exec/s: 172 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:43.673 #344 DONE cov: 10957 ft: 17593 corp: 11/321b lim: 32 exec/s: 172 rss: 74Mb 00:07:43.673 Done 344 runs in 2 second(s) 00:07:43.673 [2024-05-15 05:32:33.654573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:43.933 [2024-05-15 05:32:33.704554] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:43.933 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:43.933 05:32:33 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:43.933 [2024-05-15 05:32:33.936025] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:43.933 [2024-05-15 05:32:33.936092] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279887 ] 00:07:44.192 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.192 [2024-05-15 05:32:34.007124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.192 [2024-05-15 05:32:34.079085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.452 [2024-05-15 05:32:34.242863] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:44.452 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.452 INFO: Seed: 925864463 00:07:44.452 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:44.452 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:44.452 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:44.452 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.452 #2 INITED exec/s: 0 rss: 64Mb 00:07:44.452 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.452 This may also happen if the target rejected all inputs we tried so far 00:07:44.452 [2024-05-15 05:32:34.312305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:44.452 [2024-05-15 05:32:34.383628] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.452 [2024-05-15 05:32:34.383664] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:44.968 NEW_FUNC[1/648]: 0x484020 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:44.968 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:44.968 #9 NEW cov: 10916 ft: 10470 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:44.968 [2024-05-15 05:32:34.885610] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:44.969 [2024-05-15 05:32:34.885653] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.227 #20 NEW cov: 10933 ft: 14059 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:45.227 [2024-05-15 05:32:35.083867] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.227 [2024-05-15 05:32:35.083896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.227 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:45.227 #31 NEW cov: 10950 ft: 15863 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:45.486 [2024-05-15 05:32:35.287336] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.486 [2024-05-15 05:32:35.287366] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.486 #32 NEW cov: 10950 ft: 16636 corp: 5/53b lim: 13 exec/s: 32 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 
00:07:45.486 [2024-05-15 05:32:35.492195] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.486 [2024-05-15 05:32:35.492226] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:45.744 #33 NEW cov: 10950 ft: 16809 corp: 6/66b lim: 13 exec/s: 33 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:45.745 [2024-05-15 05:32:35.690362] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:45.745 [2024-05-15 05:32:35.690398] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.003 #44 NEW cov: 10950 ft: 17083 corp: 7/79b lim: 13 exec/s: 44 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:07:46.003 [2024-05-15 05:32:35.888352] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.003 [2024-05-15 05:32:35.888386] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.003 #55 NEW cov: 10950 ft: 17316 corp: 8/92b lim: 13 exec/s: 55 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:07:46.262 [2024-05-15 05:32:36.084449] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.262 [2024-05-15 05:32:36.084480] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.262 #61 NEW cov: 10957 ft: 17422 corp: 9/105b lim: 13 exec/s: 61 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:46.521 [2024-05-15 05:32:36.291109] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:46.521 [2024-05-15 05:32:36.291139] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:46.521 #67 NEW cov: 10957 ft: 17742 corp: 10/118b lim: 13 exec/s: 33 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 00:07:46.521 #67 DONE cov: 10957 ft: 17742 corp: 10/118b lim: 13 exec/s: 33 rss: 74Mb 00:07:46.521 Done 67 runs in 2 second(s) 00:07:46.521 [2024-05-15 05:32:36.428571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:46.521 [2024-05-15 05:32:36.481085] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:46.780 
05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:46.780 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:46.780 05:32:36 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:46.780 [2024-05-15 05:32:36.707804] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:46.780 [2024-05-15 05:32:36.707873] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280424 ] 00:07:46.780 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.780 [2024-05-15 05:32:36.779335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.039 [2024-05-15 05:32:36.851636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.039 [2024-05-15 05:32:37.019525] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:47.039 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.039 INFO: Seed: 3704866652 00:07:47.039 INFO: Loaded 1 modules (350164 inline 8-bit counters): 350164 [0x28e034c, 0x2935b20), 00:07:47.039 INFO: Loaded 1 PC tables (350164 PCs): 350164 [0x2935b20,0x2e8d860), 00:07:47.039 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:47.039 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.039 #2 INITED exec/s: 0 rss: 64Mb 00:07:47.039 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:47.039 This may also happen if the target rejected all inputs we tried so far
00:07:47.310 [2024-05-15 05:32:37.088663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:07:47.310 [2024-05-15 05:32:37.116424] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.310 [2024-05-15 05:32:37.116459] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.609 NEW_FUNC[1/648]: 0x484d10 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:07:47.609 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:07:47.609 #6 NEW cov: 10905 ft: 10814 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 4 CrossOver-InsertRepeatedBytes-EraseBytes-InsertRepeatedBytes-
00:07:47.609 [2024-05-15 05:32:37.538339] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.609 [2024-05-15 05:32:37.538384] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.609 #7 NEW cov: 10922 ft: 13598 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:47.868 [2024-05-15 05:32:37.653154] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.868 [2024-05-15 05:32:37.653189] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.868 #8 NEW cov: 10922 ft: 14885 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeByte-
00:07:47.868 [2024-05-15 05:32:37.768188] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:47.868 [2024-05-15 05:32:37.768222] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:47.868 #9 NEW cov: 10922 ft: 15192 corp: 5/37b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:48.127 [2024-05-15 05:32:37.893118] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.127 [2024-05-15 05:32:37.893153] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.127 NEW_FUNC[1/1]: 0x19e7c40 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:07:48.127 #10 NEW cov: 10939 ft: 15780 corp: 6/46b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBit-
00:07:48.127 [2024-05-15 05:32:38.006916] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.127 [2024-05-15 05:32:38.006948] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.127 #11 NEW cov: 10939 ft: 15840 corp: 7/55b lim: 9 exec/s: 11 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:48.127 [2024-05-15 05:32:38.120757] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.127 [2024-05-15 05:32:38.120789] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.385 #12 NEW cov: 10939 ft: 15956 corp: 8/64b lim: 9 exec/s: 12 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes-
00:07:48.386 [2024-05-15 05:32:38.234606] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.386 [2024-05-15 05:32:38.234639] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.386 #13 NEW cov: 10939 ft: 16454 corp: 9/73b lim: 9 exec/s: 13 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:48.386 [2024-05-15 05:32:38.349484] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.386 [2024-05-15 05:32:38.349517] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.645 #14 NEW cov: 10939 ft: 16470 corp: 10/82b lim: 9 exec/s: 14 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:48.645 [2024-05-15 05:32:38.462350] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.645 [2024-05-15 05:32:38.462387] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.645 #15 NEW cov: 10939 ft: 16627 corp: 11/91b lim: 9 exec/s: 15 rss: 74Mb L: 9/9 MS: 1 CrossOver-
00:07:48.645 [2024-05-15 05:32:38.575197] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.645 [2024-05-15 05:32:38.575232] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.645 #18 NEW cov: 10939 ft: 16653 corp: 12/100b lim: 9 exec/s: 18 rss: 74Mb L: 9/9 MS: 3 EraseBytes-InsertByte-CMP- DE: "\000\007"-
00:07:48.903 [2024-05-15 05:32:38.689202] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.904 [2024-05-15 05:32:38.689234] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.904 #19 NEW cov: 10939 ft: 16705 corp: 13/109b lim: 9 exec/s: 19 rss: 74Mb L: 9/9 MS: 1 CopyPart-
00:07:48.904 [2024-05-15 05:32:38.804058] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.904 [2024-05-15 05:32:38.804091] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:48.904 #20 NEW cov: 10946 ft: 16746 corp: 14/118b lim: 9 exec/s: 20 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:48.904 [2024-05-15 05:32:38.917983] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:48.904 [2024-05-15 05:32:38.918016] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:49.162 #21 NEW cov: 10946 ft: 16881 corp: 15/127b lim: 9 exec/s: 21 rss: 74Mb L: 9/9 MS: 1 PersAutoDict- DE: "\000\007"-
00:07:49.162 [2024-05-15 05:32:39.032988] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:07:49.162 [2024-05-15 05:32:39.033022] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:07:49.162 #27 NEW cov: 10946 ft: 16896 corp: 16/136b lim: 9 exec/s: 13 rss: 74Mb L: 9/9 MS: 1 ChangeBit-
00:07:49.162 #27 DONE cov: 10946 ft: 16896 corp: 16/136b lim: 9 exec/s: 13 rss: 74Mb
00:07:49.162 ###### Recommended dictionary. ######
00:07:49.162 "\000\007" # Uses: 2
00:07:49.162 ###### End of recommended dictionary. ######
00:07:49.162 Done 27 runs in 2 second(s)
00:07:49.163 [2024-05-15 05:32:39.115570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:07:49.163 [2024-05-15 05:32:39.169188] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:07:49.422 05:32:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:07:49.422 05:32:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:49.422 05:32:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:49.422 05:32:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:07:49.422
00:07:49.422 real 0m19.345s
00:07:49.422 user 0m26.806s
00:07:49.422 sys 0m1.789s
00:07:49.422 05:32:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable
00:07:49.422 05:32:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:49.422 ************************************
00:07:49.422 END TEST vfio_fuzz
00:07:49.422 ************************************
00:07:49.422 05:32:39 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:07:49.422
00:07:49.422 real 1m23.658s
00:07:49.422 user 2m7.189s
00:07:49.422 sys 0m9.035s
00:07:49.422 05:32:39 llvm_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable
00:07:49.422 05:32:39 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:07:49.682 ************************************
00:07:49.682 END TEST llvm_fuzz
00:07:49.682 ************************************
00:07:49.682 05:32:39 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:07:49.682 05:32:39 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT
00:07:49.682 05:32:39 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup
00:07:49.682 05:32:39 -- common/autotest_common.sh@721 -- # xtrace_disable
00:07:49.682 05:32:39 -- common/autotest_common.sh@10 -- # set +x
00:07:49.682 05:32:39 -- spdk/autotest.sh@379 -- # autotest_cleanup
00:07:49.682 05:32:39 -- common/autotest_common.sh@1389 -- # local autotest_es=0
00:07:49.682 05:32:39 -- common/autotest_common.sh@1390 -- # xtrace_disable
00:07:49.682 05:32:39 -- common/autotest_common.sh@10 -- # set +x
00:07:56.250 INFO: APP EXITING
00:07:56.250 INFO: killing all VMs
00:07:56.250 INFO: killing vhost app
00:07:56.250 INFO: EXIT DONE
00:07:58.155 Waiting for block devices as requested
00:07:58.155 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:58.156 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:58.156 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:58.414 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:58.414 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:58.414 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:58.673 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:58.673 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:58.673 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:07:58.932 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:07:58.932 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:07:58.932 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:07:58.932 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:07:59.191 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:07:59.191 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:07:59.191 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:07:59.448 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:08:02.759 Cleaning
00:08:02.759 Removing: /dev/shm/spdk_tgt_trace.pid3244683
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3242222
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3243479
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3244683
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3245383
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3246241
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3246511
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3247628
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3247643
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3248055
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3248369
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3248694
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3249031
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3249355
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3249642
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3249927
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3250244
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3251415
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3254824
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3255142
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3255519
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3255683
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3256253
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3256462
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3256906
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3257095
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3257397
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3257541
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3257704
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3257962
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3258345
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3258627
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3258912
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3259181
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3259435
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3259563
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3259634
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3259917
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3260202
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3260487
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3260768
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3261052
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3261339
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3261576
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3261804
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3262030
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3262247
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3262517
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3262799
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3263083
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3263371
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3263651
00:08:02.759 Removing: /var/run/dpdk/spdk_pid3263940
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3264229
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3264514
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3264767
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3265007
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3265160
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3265501
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3266224
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3266515
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3267046
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3267575
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3267876
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3268401
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3268936
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3269235
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3269758
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3270272
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3270580
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3271110
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3271578
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3271933
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3272468
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3272886
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3273289
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3273821
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3274232
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3274644
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3275179
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3275526
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3276006
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3276538
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3276838
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3277445
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3277978
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3278516
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3278901
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3279344
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3279887
00:08:03.018 Removing: /var/run/dpdk/spdk_pid3280424
00:08:03.018 Clean
00:08:03.290 05:32:53 -- common/autotest_common.sh@1448 -- # return 0
00:08:03.290 05:32:53 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:08:03.290 05:32:53 -- common/autotest_common.sh@727 -- # xtrace_disable
00:08:03.290 05:32:53 -- common/autotest_common.sh@10 -- # set +x
00:08:03.290 05:32:53 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:08:03.290 05:32:53 -- common/autotest_common.sh@727 -- # xtrace_disable
00:08:03.290 05:32:53 -- common/autotest_common.sh@10 -- # set +x
00:08:03.290 05:32:53 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:03.290 05:32:53 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:03.290 05:32:53 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:03.290 05:32:53 -- spdk/autotest.sh@387 -- # hash lcov
00:08:03.290 05:32:53 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:08:03.290 05:32:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:08:03.290 05:32:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:08:03.290 05:32:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:03.290 05:32:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:03.290 05:32:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:03.290 05:32:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:03.290 05:32:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:03.290 05:32:53 -- paths/export.sh@5 -- $ export PATH
00:08:03.290 05:32:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:03.290 05:32:53 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:08:03.290 05:32:53 -- common/autobuild_common.sh@437 -- $ date +%s
00:08:03.290 05:32:53 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715743973.XXXXXX
00:08:03.290 05:32:53 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715743973.6p0wzs
00:08:03.290 05:32:53 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:08:03.290 05:32:53 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:08:03.290 05:32:53 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:08:03.290 05:32:53 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:08:03.290 05:32:53 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:08:03.290 05:32:53 -- common/autobuild_common.sh@453 -- $ get_config_params
00:08:03.290 05:32:53 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:08:03.290 05:32:53 -- common/autotest_common.sh@10 -- $ set +x
00:08:03.291 05:32:53 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:08:03.291 05:32:53 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:08:03.291 05:32:53 -- pm/common@17 -- $ local monitor
00:08:03.291 05:32:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:03.291 05:32:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:03.291 05:32:53 -- pm/common@21 -- $ date +%s
00:08:03.291 05:32:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:03.291 05:32:53 -- pm/common@21 -- $ date +%s
00:08:03.291 05:32:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:03.291 05:32:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715743973
00:08:03.291 05:32:53 -- pm/common@25 -- $ sleep 1
00:08:03.291 05:32:53 -- pm/common@21 -- $ date +%s
00:08:03.291 05:32:53 -- pm/common@21 -- $ date +%s
00:08:03.291 05:32:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715743973
00:08:03.291 05:32:53 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715743973
00:08:03.291 05:32:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715743973
00:08:03.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715743973_collect-vmstat.pm.log
00:08:03.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715743973_collect-cpu-load.pm.log
00:08:03.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715743973_collect-cpu-temp.pm.log
00:08:03.548 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715743973_collect-bmc-pm.bmc.pm.log
00:08:04.485 05:32:54 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:08:04.485 05:32:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:08:04.485 05:32:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:04.485 05:32:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:08:04.485 05:32:54 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:08:04.485 05:32:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:08:04.485 05:32:54 -- spdk/autopackage.sh@19 -- $ timing_finish
00:08:04.485 05:32:54 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:04.485 05:32:54 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:08:04.485 05:32:54 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:04.485 05:32:54 -- spdk/autopackage.sh@20 -- $ exit 0
00:08:04.485 05:32:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:08:04.485 05:32:54 -- pm/common@29 -- $ signal_monitor_resources TERM
00:08:04.485 05:32:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:08:04.485 05:32:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:04.485 05:32:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:08:04.486 05:32:54 -- pm/common@44 -- $ pid=3287348
00:08:04.486 05:32:54 -- pm/common@50 -- $ kill -TERM 3287348
00:08:04.486 05:32:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:04.486 05:32:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:08:04.486 05:32:54 -- pm/common@44 -- $ pid=3287350
00:08:04.486 05:32:54 -- pm/common@50 -- $ kill -TERM 3287350
00:08:04.486 05:32:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:04.486 05:32:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:08:04.486 05:32:54 -- pm/common@44 -- $ pid=3287355
00:08:04.486 05:32:54 -- pm/common@50 -- $ kill -TERM 3287355
00:08:04.486 05:32:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:04.486 05:32:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:08:04.486 05:32:54 -- pm/common@44 -- $ pid=3287414
00:08:04.486 05:32:54 -- pm/common@50 -- $ sudo -E kill -TERM 3287414
00:08:04.496 + [[ -n 3137141 ]]
00:08:04.496 + sudo kill 3137141
00:08:04.508 [Pipeline] }
00:08:04.517 [Pipeline] // stage
00:08:04.524 [Pipeline] }
00:08:04.544 [Pipeline] // timeout
00:08:04.551 [Pipeline] }
00:08:04.573 [Pipeline] // catchError
00:08:04.580 [Pipeline] }
00:08:04.602 [Pipeline] // wrap
00:08:04.610 [Pipeline] }
00:08:04.629 [Pipeline] // catchError
00:08:04.642 [Pipeline] stage
00:08:04.645 [Pipeline] { (Epilogue)
00:08:04.664 [Pipeline] catchError
00:08:04.666 [Pipeline] {
00:08:04.684 [Pipeline] echo
00:08:04.685 Cleanup processes
00:08:04.695 [Pipeline] sh
00:08:04.980 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:04.980 3198708 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715743636
00:08:04.980 3198748 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715743636
00:08:04.981 3287596 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:08:04.981 3288468 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:04.996 [Pipeline] sh
00:08:05.280 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:05.280 ++ grep -v 'sudo pgrep'
00:08:05.280 ++ awk '{print $1}'
00:08:05.280 + sudo kill -9 3198708 3198748 3287596
00:08:05.294 [Pipeline] sh
00:08:05.580 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:08:05.580 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:08:05.580 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:08:06.975 [Pipeline] sh
00:08:07.257 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:08:07.257 Artifacts sizes are good
00:08:07.274 [Pipeline] archiveArtifacts
00:08:07.283 Archiving artifacts
00:08:07.358 [Pipeline] sh
00:08:07.671 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:08:07.686 [Pipeline] cleanWs
00:08:07.696 [WS-CLEANUP] Deleting project workspace...
00:08:07.696 [WS-CLEANUP] Deferred wipeout is used...
00:08:07.703 [WS-CLEANUP] done
00:08:07.705 [Pipeline] }
00:08:07.725 [Pipeline] // catchError
00:08:07.739 [Pipeline] sh
00:08:08.024 + logger -p user.info -t JENKINS-CI
00:08:08.034 [Pipeline] }
00:08:08.050 [Pipeline] // stage
00:08:08.058 [Pipeline] }
00:08:08.078 [Pipeline] // node
00:08:08.085 [Pipeline] End of Pipeline
00:08:08.126 Finished: SUCCESS